Meta Secures Top AI Researcher from Apple
Meta has recruited a top AI researcher from Apple, the latest sign of the intense competition among tech giants to secure artificial intelligence talent. The move underscores the growing importance of AI in shaping the future of technology and business.
The Future of Professionals Roundtable
During the “Future of Professionals” roundtable held at the University of Oxford’s Saïd Business School, an insightful discussion took place on how companies evaluate the performance of AI systems. Most organizations use human performance as a benchmark for AI, but this approach can become complex when applied to specific use cases.
Simon Robinson, executive editor at Reuters, shared that his company has committed to using AI only if it performs better than humans on average. For instance, they have started using AI for translating news stories into foreign languages, as the software now produces fewer errors than human translators.
However, this standard may not always be appropriate. Utham Ali, global responsible AI officer at BP, described an experiment in which a large language model (LLM) was tested on safety engineering exams. While the LLM scored 92%, above the pass mark, the 8% of questions it missed raised concerns. Because it was unclear why the AI made those mistakes, BP hesitated to deploy it, especially in high-stakes environments.
Performance Criteria and Risk Assessment
The challenges of deploying AI in high-risk domains are significant. While AI systems may outperform humans on average, their performance on critical decisions might not be sufficient. For example, in medical imaging, an AI system could be better at detecting anomalies overall but might miss more aggressive cancers. This highlights the need for a nuanced evaluation of AI performance beyond average metrics.
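The gap between average and critical-case performance can be made concrete with a small sketch. The numbers below are entirely illustrative (not drawn from any real study): a model that looks strong on overall accuracy can still underperform badly on the small subset of cases where errors matter most.

```python
# Illustrative sketch: average accuracy can hide poor performance on
# critical cases. All figures here are made up for demonstration.

# Each case is (model_correct, is_critical) for a hypothetical screening task.
cases = [
    # Routine cases: the model is right 95 times out of 100.
    *[(True, False)] * 95, *[(False, False)] * 5,
    # Critical cases (e.g. aggressive cancers): right only 6 times out of 10.
    *[(True, True)] * 6, *[(False, True)] * 4,
]

overall = sum(correct for correct, _ in cases) / len(cases)
critical = [correct for correct, is_crit in cases if is_crit]
critical_acc = sum(critical) / len(critical)

print(f"overall accuracy:  {overall:.0%}")      # ~92%: looks strong
print(f"critical accuracy: {critical_acc:.0%}")  # 60%: the number that matters
```

Here the headline 92% figure conceals a 60% hit rate on the cases with the highest stakes, which is why evaluations in high-risk domains typically break performance out by case severity rather than reporting a single average.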
A recent research paper emphasized the limitations of current AI models. It found that adding seemingly irrelevant phrases to math problems can drastically reduce the accuracy of AI reasoning models. This raises questions about the reliability of AI systems and the need for further research into their decision-making processes.
AI as an Alien Entity
AI systems are often compared to various roles, such as interns or mentors. However, the best analogy might be that of an alien. Like the Coneheads from Saturday Night Live, AI can be brilliant at certain tasks but lacks the human understanding and thought processes we are familiar with. This alien nature poses challenges for trust and acceptance, particularly in areas where mistakes can have severe consequences.
Self-Driving Cars and Public Acceptance
Self-driving cars exemplify the tension between AI’s potential benefits and public unease. While autonomous vehicles could reduce accidents, their unpredictable behavior—such as sudden lane changes or misidentifying objects—can cause discomfort. Society’s preference for human drivers, despite their fallibility, underscores the value placed on control and predictability.
Upcoming AI Events and Resources
For those interested in exploring AI further, there are several opportunities available. A U.S. paperback edition of “Mastering AI: A Survival Guide to Our Superpowered Future” is now available from Simon & Schuster. Additionally, Brainstorm AI Singapore will take place on July 22 and 23, featuring discussions on AI agents, data centers, and responsible deployment. Attendees can use the discount code BAI100JeremyK for complimentary tickets.
As AI continues to evolve, the conversation around its deployment and impact remains crucial. Understanding the complexities and limitations of AI is essential for navigating its role in our future.

