The tech industry is racing toward an AI-laden future at a breakneck pace reminiscent of the microprocessor race of the late 1970s. This time, we’re stepping beyond the ability to perform calculations quickly and have instead opened the floodgates to legal and ethical questions that, before the turn of the millennium, seemed a century away. Are we prepared to take the necessary steps to ensure AI developments work for the betterment of mankind rather than its detriment?
The AI Future is All But Assured
At this point, there’s little sense in ruminating on a future in which machine learning isn’t a dominant force across many industries. Major corporations have been snapping up AI-related startups as quickly as possible and even poaching top minds from one another. Apple has already drawn AI specialists away from Google, a move clearly aimed at raising the company’s stake in near-future AI developments while simultaneously depriving a competitor of noted industry experts.
Making moves this open and public means there are few secrets left about how quickly research and development will ramp up across most facets of AI. Companies are already folding automated workers into accounting and data collation, removing the need for tedious, time-consuming work that once fell to large teams of human employees. Many of these jobs would be nearly impossible without machine intervention; finding perfectly faceted diamonds that match an individual’s tastes and preferences, for example, would be a monumental task for any business without a helpful computer handling the search.
How Europe’s Citizens Might Influence Future AI Regulation
Nothing can stop an industry before it starts quite as quickly as regulations that fail to address key issues while overreacting to potential abuse. To counter this, experts in Europe have drafted an AI ethics proposal intended to shape the ever-changing landscape of machine learning in a healthy way, without choking out the industry before it has a chance to truly shine.
To achieve better results, these experts have opened the draft to European citizens for comments and concerns, including potential edge cases the authors may not have addressed. Facial recognition and identity scanning remain the public’s top concern, a revelation that is hardly surprising following the response to one of London’s first overt tests of recognition software during the Christmas season.
While the software failed to match any suspects in its initial overt trial, there is still an air of concern over just how far a full-scale rollout will go. Considering Europe is far from the only place testing recognition software, one has to wonder whether China’s recent social credit system will be further twisted to include facial recognition and tracking on top of its already draconian ranking scheme.
On the softer side of the equation are questions lobbed by experts at other experts. In one of the strangest yet most interesting cases, several research publishers are pitching the idea of AI-vetted peer review in the field of machine learning itself. There are just as many questions about how qualified an AI may be to confirm or deny facts about itself, but the most bizarre outcome may be a machine learning algorithm that pushes itself into prominence by promoting or dismissing work relating to its own existence in a biased manner.
While the positive applications of artificial intelligence remain many and varied, stemming the tide of abuse and poor management before it becomes rampant remains a vital task for experts and average citizens alike. Life could be much easier with machine learning algorithms taking on a more prominent role in society, yet the very same technology could prove a dangerously slippery slope if not treated with the scrutiny it deserves.