AI and the AI-first national strategy

@Time just ran an article linking to a "Superintelligence Strategy" paper released this week by @DanHendrycks @EricSchmidt and @AlexanderWang.
While I agree we should raise awareness of "Mutual Assured AI Malfunction (MAIM)," I do not fully agree with the authors that it parallels the Mutual Assured Destruction (MAD) framework used during the Cold War.
First, AI is not like nuclear technology: it is far more fragmented, and this raises the governance challenge. For example, it is very hard to maintain human control over agentic AI. The paper uses the example of drones. If you are interested in LAWS, here is a paper we wrote on human decision-making in autonomous weapons: https://ieeexplore.ieee.org/document/10707139. Please do contact me if you would like to discuss this.
Second, the paper assumes it is all about AI chips and their choke points; this is not entirely true. AI is hardware, data, software, and human talent. The paper assumes uranium was a lynchpin of power in the way AI chips are today (p. 5). Please read Ed Conway's book "Material World" to understand the degree of fragmentation in silicon chip manufacturing. There are other excellent books on algorithms and talent you should also familiarize yourself with. So even if you brought all manufacturing into one country, you would still have the problem of a choke point. Further, as you embed AI technology in everything, you increase your cybersecurity vulnerabilities, so that spending (at a time when many countries are struggling with budget deficits) will rise disproportionately, draining money away from government spending on basic access to health, education, housing, retirement savings, and other public goods (everything an average person needs to survive and thrive).
The article's premise is that you need an AI-first strategy (something being echoed across the world). Let me return to the assumption that when you run AI defense capabilities like the nuclear Manhattan Project, you reduce transparency and hence accountability. In the nuclear domain there is much tighter control over the supply chain, and missing raw materials are far more easily flagged. In AI, access is broad across countries, data and hardware are often open, and accountability is poor (weapon imports are a major problem; we could have a whole discussion on the lack of accountability here).
Another challenge with the paper is that much of the technology is dual use, so it comes together in ways you cannot imagine. The issue is not superintelligence; it is the focus on AI-first strategies, the huge amounts of defense money flowing into AI, the testing of AI on battlefields (we know of two countries where this is happening right now), and the poor understanding of what AI is and can do, and what it is not and should not do (among policymakers, media, the public, and decision makers). For example, the article says, "As models get better, they make fewer basic mistakes and become more reliable." This is incorrect: models need to be constantly trained and retrained, and weights need to be updated as new knowledge feeds into the system. We also have an issue where error rates are being accepted for global populations: a 0.01% error rate on 1 million people is very different from a 0.01% error rate on 100 people, as the quick calculation below shows.
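To make the scale point concrete, here is a minimal sketch (the population figures and the 0.01% rate are illustrative assumptions, not taken from the paper) showing how the same error rate translates into very different absolute numbers of affected people as deployment scales:

```python
# Illustrative only: an identical error rate yields very different
# absolute numbers of affected people at different deployment scales.

ERROR_RATE = 0.0001  # 0.01%, an assumed illustrative rate

for population in (100, 1_000_000, 1_000_000_000):
    expected_errors = ERROR_RATE * population
    print(f"Population {population:>13,}: ~{expected_errors:,.2f} expected errors")

# Output:
# Population           100: ~0.01 expected errors
# Population     1,000,000: ~100.00 expected errors
# Population 1,000,000,000: ~100,000.00 expected errors
```

An error rate that looks acceptable in a pilot of 100 people becomes hundreds of thousands of affected people when the same system is rolled out to a global population.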
The paper mixes up political strategy, military defense, economic superiority, and national competitiveness: a potent mixture that is highly combustible. It is right to flag the threats, but to de-escalate the situation we need a more stable and less paranoid world. If AI is connecting previously isolated systems (health, education, business, finance) at a global level, at superhuman speeds of data transfer, with little accountability, and with its known challenges of algorithmic bias, lack of oversight, data fragility, and infringement on human rights, what can we do better? Safety researchers or lawyers alone cannot solve human problems unless they are in the field: you need political scientists, governance experts, sociologists, educators, sustainability experts, philosophers, a diversity of experts to address AI governance issues.
What can we do better?
Please reinstate your AI ethics, governance, and oversight teams
Provide more transparency on dual-use AI projects
Please report facts fairly, without sensationalizing AI
Make sure all AI decision makers (policymakers, businesses, users) have a robust understanding of their human rights and the ways AI infringes on them
Provide more transparency and legal redress around the Terms of Reference that AI companies and data aggregators use to harvest data to sell for military use
If you are interested in more AI topics, read www.melodena.com