Dennis Boyle, at Boyle & Jasari
Please meet Dennis Boyle, white-collar criminal defense attorney and founding partner at Boyle & Jasari.
In his interview, Mr. Boyle explains his practice and expertise. He also highlights which financial market practitioners will benefit from his experience. Lastly, Dennis explains the benefits of experts, notably from Navesink International, and how to select the right expert.
Generative AI
This interview had a double purpose. The first is to introduce our client, a talented criminal defense attorney specializing in financial markets.
The second is to demonstrate a positive use of GenAI, generative artificial intelligence. GenAI created all the video clips in this interview from nothing more than pictures and sound recordings. The clips were then edited into a full interview, just like a conventional one. The quality of the clips is not at the “deepfake” level (and is not intended to be), but the process spared Dennis Boyle and Navesink the costs of renting a movie studio, hiring camera operators and sound engineers, and paying for flights and hotels.
This new and unusual approach to the interview invites a discussion of the progress of artificial intelligence, its current capabilities, and its drawbacks.
What is AI?
“Artificial Intelligence”, a much more attention-grabbing name than the original “advanced automatons”, is the natural evolution of computer programming. What brought the leap forward in efficiency is the use of Big Data.
The first advanced automatons were “expert systems”, such as TurboTax. The programmer, collaborating with accountants, coded an endless series of “If A and B, then C” rules to describe all the possible situations in a tax return. Once deployed, the program takes a user’s specific situation, follows the logic, and calculates their income tax. The touch of intelligence comes from testing multiple possible avenues: by systematically checking all possible deductions, the computer finds the minimum tax and makes a recommendation to the taxpayer. The same approach was deployed in the first medical expert systems: doctors, in conjunction with developers, identified the significance of each line of a blood test and of many other symptoms.
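As a minimal sketch of what those hand-coded rules look like, here is a toy tax calculator in Python. Every threshold, rate, and rule below is hypothetical and vastly simplified; the point is that a human typed in each rule, and the program can only be as good as those rules.

```python
# Minimal sketch of an "expert system" in the TurboTax spirit.
# Every rule, threshold, and rate below is hypothetical and hand-coded:
# the program can never do better than the rules the experts wrote.

def standard_deduction(filing_status: str) -> float:
    # Rule written with the accountant: "if single then X, otherwise Y".
    return 10_000.0 if filing_status == "single" else 20_000.0

def best_deduction(itemized_total: float, filing_status: str) -> float:
    # The "touch of intelligence": test both avenues and keep the better one.
    return max(itemized_total, standard_deduction(filing_status))

def income_tax(gross_income: float, itemized_total: float, filing_status: str) -> float:
    taxable = max(0.0, gross_income - best_deduction(itemized_total, filing_status))
    # Hypothetical two-bracket schedule, again coded rule by rule.
    if taxable <= 40_000:
        return taxable * 0.10
    return 40_000 * 0.10 + (taxable - 40_000) * 0.22

# Recommend the lower tax for one specific taxpayer's situation.
print(income_tax(65_000, 12_500, "single"))
```

Multiply this by thousands of rules and you have a real expert system: predictable, auditable, and only as good as its coded logic.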
Expert systems were certainly a valuable improvement over the past. They solve specific problems reasonably well, following the logic that the experts have coded. They are therefore predictable, but they will never achieve better results than what the experts have patiently coded. Since medical diagnosis requires more experience and human judgment than a purely rule-based accounting problem, a medical expert system will be less accurate than a TurboTax-style one. In any case, coding such a program is tedious and expensive.
Modern AI uses far less code. Instead, it reproduces what it sees in millions of similar situations. Take, for instance, recognizing an individual from a picture of their face. That analysis is done with neural networks, where a neuron (a tiny bit of code) calculates a weighted average of the colors of a few pixels before passing its result to the next neuron, which computes its own weighted average, and so on. Layers upon layers of such mini-programs are progressively able to discern an eye or an ear, before matching those features to the right individual in the memory bank. The programmer who created the AI never entered the formulas for the mini-programs. Instead, the program has “learned” the weights of each tiny formula by looking at millions of pictures. The efficiency of the system is not in the code, but in each of the mini-formulas in the millions of neurons.
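To make that concrete, here is a minimal sketch of such a neuron and a layer in Python. The weights are random placeholders (in a real network they would be learned, not typed in), and the four-pixel “image” is invented; the point is only that each neuron is a tiny weighted average, and that stacking layers turns raw pixels into higher-level features.

```python
import random

# A "neuron" is a tiny weighted average followed by a squashing function.
# The weights here are random placeholders: in a real network they are
# learned from data, never typed in by the programmer.

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # keep the signal only if it is positive (ReLU)

def layer(inputs: list[float], n_neurons: int) -> list[float]:
    outputs = []
    for _ in range(n_neurons):
        weights = [random.uniform(-1.0, 1.0) for _ in inputs]
        outputs.append(neuron(inputs, weights, bias=0.0))
    return outputs

# An invented four-pixel "image": layer upon layer turns raw pixel
# values into progressively higher-level features.
pixels = [0.2, 0.9, 0.4, 0.7]
hidden = layer(pixels, n_neurons=3)    # early layers: edges, corners...
features = layer(hidden, n_neurons=2)  # later layers: an eye, an ear...
print(features)
```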
The process of teaching the mini-formulas to the neurons from the millions of pictures is called “learning”. When the neurons are organized in many layers, the process is called “deep learning”. When the program learns by trial and error, guided by rewards and penalties rather than labeled examples, the process is called “reinforcement learning”. When the program plays millions of chess matches against a copy of itself, the process is called “self-play”, a form of adversarial learning. When the computer is then used to create new pictures, documents, or sentences, the program is called “generative AI”.
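Here is a minimal sketch of the simplest possible learning loop, a perceptron trained on a tiny invented data set. The data, learning rate, and number of passes are all toy assumptions; what matters is that the programmer writes only the update rule, and the weights themselves come from the examples.

```python
# Minimal sketch of "learning" with a perceptron on an invented data set:
# the label is 1 when the two inputs are both large, 0 otherwise.
# The programmer writes only the update rule; the weights come from the data.

examples = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.4, 0.3], 0), ([0.7, 0.6], 1)]
weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                              # look at the examples many times
    for inputs, target in examples:
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        prediction = 1 if activation > 0 else 0
        error = target - prediction              # how wrong was the guess?
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error            # nudge the tiny formula

print(weights, bias)  # the learned mini-formula, never typed in by a human
```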
What changes from one AI program to another is:
- the nature of the input data: Pictures? Sentences? Sound recordings?
- which features are analyzed: Pixel colors, or features calculated from the pictures (say, an analysis of all the pixel colors)? In other words, are the features simply observed or calculated? (A short sketch follows this list.)
- and how we drive the learning process: Are we trying to match the features of a given song, or matching the choices of other listeners who liked that song?
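As promised above, here is a small Python sketch of the observed-versus-calculated distinction. The 3×3 “image” and the two calculated features (average brightness and a crude vertical-edge score) are invented for illustration; real systems use far richer features, but the split between the two kinds of input is the same.

```python
# Two ways to feed the same (invented) 3x3 "image" to an AI.

image = [
    [0.1, 0.9, 0.1],
    [0.1, 0.9, 0.1],
    [0.1, 0.9, 0.1],
]

# 1. Observed features: hand the raw pixel colors straight to the model.
raw_features = [pixel for row in image for pixel in row]

# 2. Calculated features: summarize the pixels first (average brightness
#    and a crude vertical-edge score), then hand those numbers to the model.
brightness = sum(raw_features) / len(raw_features)
vertical_edge = sum(abs(row[1] - row[0]) + abs(row[1] - row[2]) for row in image)
calculated_features = [brightness, vertical_edge]

print(raw_features)         # 9 numbers, one per pixel
print(calculated_features)  # 2 numbers, computed from the pixels
```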
Performance
How much better is AI compared to expert systems? It depends on the problem, but the advent of large databases, years of research on how best to connect millions of neurons to each other, and fast, powerful hardware for teaching the tiny formulas make AI perform much better in a growing number of situations. That is how Siri and Alexa recognize words in our voices, how computers recognize road signs in videos, and how robots can walk. AI can now handle far more complex tasks than calculating income taxes, and it beats expert systems in most cases.
How much better is AI compared to humans? That is a more complicated question. Computers can ingest far more data than a human could in a lifetime, but the problems to solve may be extremely complicated: it takes years for a child to learn to talk and to walk. Computers are pretty good at talking nowadays, but still far worse at walking. On the other hand, computers fly planes as efficiently as humans; we just prefer to have a human pilot at the controls. Computers have decisively beaten humans at chess and Go; they even invent moves that no human has ever played. Meanwhile, there are so many unexpected situations on a road that teaching cars to drive has taken many years of effort; even so, in some settings self-driving cars now match or beat human drivers. So AI beats humans in some areas, but not in others. Its progress is not slowing, and new areas keep being added to the list.
The weaknesses
There are many areas of concern with these new systems, for instance:
- Expert systems are predictable but limited to the complexity of what has been programmed. AI systems, on the other hand, can handle situations they were never explicitly coded for, but the results are less reliable. They may go wrong without notice: ChatGPT is known to hallucinate (invent things). Worse, we cannot know in advance when the bad answers will show up.
- And we certainly cannot say why – AI results are virtually impossible to interpret, and therefore to audit.
- AI has biases. An AI trained on Vogue, Elle and Fashion will tell you that beauty exists only in thin, blonde women with blue eyes. In other words, such an AI would show gender and race biases because of its training data.
- We can add filters and restrictions to reduce obvious biases, but the computer will find proxies and remain biased: an AI deciding bail for accused criminals is not supposed to use, and is not fed, racial information, but zip codes will have a similar influence (a toy illustration follows this list).
- Deviations can have multiple causes. Sometimes the unexpected outcome is due to the users: the same chatbot technology from the same company (Microsoft) became XiaoIce, a lovely companion in China, and Tay, a racist commentator in North America. The difference was how users interacted with each bot after deployment.
- Personalizing an AI to each user also has perverse effects: personalizing a user’s newsfeed on Facebook creates echo chambers, where users congregate into small communities that mutually reinforce each other’s views.
- The company creating the AI may be untrustworthy in that regard: reducing echo chambers requires that Facebook include diverging or random opinions in personalized feeds. That diversity also reduces user engagement, and Mark Zuckerberg is not too keen on reducing his income.
- Echo chambers are also driven by users’ choice of friends and clicks, which are human behaviors… Facebook may know us better than we are willing to admit.
- AI can be manipulated: social media platforms are full of fake profiles, which their puppet masters use to push and reinforce content for the general public. Wars and elections are now waged in the court of public opinion through social media.
- AI may reduce diversity: “Other users also purchased” prevents unknown products from ever being discovered and concentrates sales on fewer products. On Amazon, the rich get richer. The same concentration will hurt the diversity of human ideas and cultures if deployed in public discourse.
- Social studies indicate that full transparency may not improve trust: explaining that the AI randomly gave a discount to another user (to reduce concentration, or to train the model) may not give *you* personal satisfaction.
- AI can be used for nefarious purposes: deepfakes permit impersonation, with manipulation of public opinion and theft as the unpleasant outcomes. AI creators are wary of offering video-creation tools to the general public because of these possible crimes. The interview above does not try to be perfect for this same reason.
- etc, etc.
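To illustrate the proxy problem from the bail example above, here is a small Python simulation. All of the data is invented: two equally sized groups, ninety percent segregated by zip code, and a biased history in which group B was denied bail more often. The model is fed only zip codes, never the group label, yet its scores still split along group lines.

```python
import random
random.seed(0)

# Toy illustration of a proxy. Everything is invented: two equally sized
# groups, 90% segregated by zip code, and a biased history in which group B
# was denied bail far more often. The "model" never sees the group label.

def make_record():
    group = random.choice(["A", "B"])
    home_zip = "11111" if group == "A" else "22222"
    other_zip = "22222" if group == "A" else "11111"
    zip_code = home_zip if random.random() < 0.9 else other_zip
    denied = random.random() < (0.6 if group == "B" else 0.2)  # biased past decisions
    return group, zip_code, denied

history = [make_record() for _ in range(10_000)]

# "Model": the historical denial rate per zip code; the group label is never used.
denial_rate = {}
for z in ("11111", "22222"):
    records = [denied for _, zip_code, denied in history if zip_code == z]
    denial_rate[z] = sum(records) / len(records)

# Score the same population: the gap between groups survives anyway.
for g in ("A", "B"):
    scores = [denial_rate[zip_code] for group, zip_code, _ in history if group == g]
    print(g, round(sum(scores) / len(scores), 2))  # roughly A: 0.27, B: 0.53
```

Removing the protected attribute is therefore not enough; correlated features carry it right back in.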
Should we be afraid?
Flint helped cut food and trees. Today, knives are in every kitchen and every workshop. Unfortunately, knives are also a murder weapon of choice. Locomotives were going to create unemployment, but high-speed TGV trains are a blessing. Nuclear energy heats our homes but could kill millions in a flash. Should we be afraid of AI? A technology is not a danger in itself. It is how we use it that determines its overall benefit.
A few more ideas:
- Humans sometimes demand too much from AI. 93% of drivers consider themselves better than the median driver. We forgive ourselves for taking a wrong turn but will not forgive a bad itinerary choice by our GPS. Human drivers have many car accidents, but a Tesla involved in a crash makes headlines. Our (mis)trust in AI is not always objective.
- We construct those AIs through our clicks, and users may be to blame for the model’s weaknesses.
- As a result, users need to be aware of both AI’s weaknesses and their own. That awareness will take time to spread.
- Coders may not know the implicit biases of their AI. But at the very least, programmers should actively assess and limit weaknesses before deploying a model. Should those biases be explained at the same time?
- Users should have a right to control how their data is used and by whom, as well as to correct the data, if not the model outcomes.
- Programmers should disclose the provenance of their data.
- Our regulations should strike a balance between the rights of authorities and those of creators. Creators may abuse their position and know-how, but authorities also need industry experts to audit and decide. Nor should we shut down self-driving research because of a single accident.
- Some settings are more critical than others. Healthcare and transportation are more critical than music recommendations. If regulations cannot differentiate priorities between areas, maybe jurisprudence will.