
When Technology and Public Purpose Meet



Yesterday I attended the Technology and Public Purpose (TAPP) Summit 2023, a project of Harvard's Belfer Center for Science and International Affairs. The TAPP Summit in Boston was held in memory of the project's founder, Ash Carter, the 25th US Secretary of Defense. The tributes to him were moving; he was an exceptional leader and boundary-spanner. To quote him,

"The bridges between tech-driven change and public purpose need to be repaired and restored."

I believe this statement holds true today more than ever.


Here are key takeaways from some of the panels and speakers (more later).



The panel, The CHIPS and Science Act: A New Era of U.S. Industrial Policy, featured very insightful discussions led by Doug Calidas, Vice President and Policy Director of The Complete Agency. David Goldston, Director of the MIT Washington Office, highlighted that we need to look at the CHIPS part (bringing semiconductor manufacturing back to the USA) and the Science part (investing in R&D) separately. Doug and Edlyn Levine, Chief Science Officer and co-founder of America’s Frontier Fund, were quick to stress that the Act gained momentum with China’s ambitions and the growing awareness of the USA’s loss of standing in major tech areas (ones that overlap with defense).




Meg Rithmire, F. Warren McFarlan Associate Professor of Business Administration in the Business, Government, and International Economy Unit at Harvard Business School, spoke about China's "Made in China 2025" campaign and the country's tremendous tolerance for experimentation, with every region trying to create a fab or a quantum computer. This policy differs from that of the USA (and EU), where there is more oversight of budgets. John Park, Director of the Korea Project and Adjunct Lecturer in Public Policy at Harvard Kennedy School, raised important points on rethinking industrial policy in a globalized world. He stressed that the semiconductor industry needs a global supply chain, making it impossible to "go it alone." There was a good discussion around finding common ground with other countries in less sensitive industries like clean tech, hence the need for economic statecraft, a topic John highlighted. More importantly, he said, "de-risking" should not be confused with "de-coupling" (which is not advised).




I loved the discussion between Leisel Bogan, Director of the Senate Innovation and Modernization effort for TechCongress, and Jonathan Zittrain, Co-Founder of the Berkman Klein Center for Internet & Society. They began by discussing whether Section 230 is still relevant. Section 230 in the USA shields online platforms from liability for content their users post, and it has been the umbrella under which Big Tech and many AI firms hide. Jonathan highlighted that social media spaces should be treated as a public health issue: we cannot take an individualistic approach, but a broader societal one. I agree!


Jonathan recounted his journey with AI and his changing stance on governance since the dot-com era. The growing worry is around how we are using AI, and he told a story to help us draw parallels. Two days ago, a man was arrested for a bomb scare at Harvard. The individual said he had responded to an ad on Craigslist and followed instructions to earn $300. He picked up and dropped off a bag at the Science Center Plaza (thinking he was delivering something for a family member), which then exploded. Jonathan highlighted that ChatGPT could, for example, issue similarly explicit instructions (based on some weird movie plot), and if we stop thinking and simply do what it advises, we may be in big trouble.

He said the dilemmas of generative AI are threefold:

1. We (the government) do not know what is happening. For example, at one time, DeepMind had 400 postdocs. Had they been in academia, they would have published papers that all could access; in the private sector, most of their work is hidden as trade secrets or patents, giving very little transparency for oversight.

2. We (the government) do not know what we want. The competing priorities in the dialogue, such as nation first (industry and economic opportunity), safety, security (geopolitical risk), sustainability, well-being, and jobs, make it hard to have a clear line of sight on the mission.

3. We (the public) don't trust anyone to give it to us. We don't trust tech, but we don't trust government either. The worry is that when you don't have trust, you default to more tech to build trust, which never addresses the underlying issue and becomes more dangerous as the fabric of trust continues to tear.


Jonathan had some interesting suggestions, like rewarding tech firms for transparency (with limits on liability or even, in some cases, immunity). Get young high schoolers to debate ads and outputs from AI projects and social spaces (like Snapchat, chatbots, Instagram, and Facebook) and allow them to ban content as part of school projects; it will help them think about global citizenship and other people's rights and perspectives, especially if they work across countries. Get people to think: not accepting what they see or hear (like deepfakes) at face value, but asking where it came from, what the source's past reputation is, and whether it could be wrong, rather than assuming it is right and sharing it immediately.


Another important topic of discussion came in a panel put together by Sarah Hubbard (report here), focused on Decentralized Autonomous Organizations (DAOs). DAOs are often touted as another form of governance: there were approximately 6,000 DAOs as of June 2022, with contributions valued at $25 billion! However, the space is highly contested, as there are not many use cases that separate impact (meeting objectives) from interventions. In the USA, for example, legislation is emerging in states like Wyoming, Tennessee, Vermont, and Colorado, but it is not yet coherent. According to Sarah, the future focus needs to be on detangling legal complexity, retaining domestic innovation, and encouraging self-governance. She also stressed the need for more good use cases demonstrating impact.
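
To make "DAOs as governance" concrete, here is a minimal, hypothetical sketch of the token-weighted voting rule at the heart of many DAOs. This is my own illustration rather than anything presented in the panel; real DAOs encode this logic in on-chain smart contracts, and the names and balances below are invented for the example.

```python
# Token-weighted DAO voting, reduced to its core logic (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    votes_for: int = 0      # tokens voting "yes"
    votes_against: int = 0  # tokens voting "no"

class DAO:
    def __init__(self, token_balances: dict[str, int]):
        self.token_balances = token_balances  # member -> governance tokens

    def vote(self, proposal: Proposal, member: str, support: bool) -> None:
        # Voting power is proportional to tokens held, not one person, one vote.
        weight = self.token_balances.get(member, 0)
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def passes(self, proposal: Proposal) -> bool:
        return proposal.votes_for > proposal.votes_against

# Usage: three members with unequal holdings decide one proposal.
dao = DAO({"alice": 100, "bob": 40, "carol": 10})
p = Proposal("Fund a clean-tech grants program")
dao.vote(p, "alice", support=True)
dao.vote(p, "bob", support=False)
dao.vote(p, "carol", support=False)
print(dao.passes(p))  # True: alice's 100 tokens outweigh bob and carol's 50
```

The sketch makes the governance concern visible: whoever holds the tokens holds the vote, which is exactly why the questions of legal status and self-governance raised in the panel matter.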



Helena Rong spoke about decentralized trust-building technologies, or distributed ledgers. Technologies like blockchain were created to solve the trust problem (they are "trustless": you no longer need to trust a human, you trust computer code). However, this means trust is diffused across a greater number of actors, many of whom are invisible. From a governance point of view, you need to look at both the technology layer and the social layer (which we don't). More in her paper: here.
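
As an aside of my own (not from Helena's talk), here is a minimal sketch of what "trusting computer code" means mechanically in a blockchain: each block commits to the hash of the previous one, so anyone who runs the verification code can detect tampering without trusting any individual. The block structure is invented for illustration.

```python
# Minimal hash-chain sketch (illustrative only): each block records the hash
# of its predecessor, so altering history anywhere breaks every later link.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain: list[dict]) -> bool:
    # The "trustless" part: trust comes from running this check yourself,
    # not from believing any person or institution.
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

genesis = make_block("genesis", prev_hash="0" * 64)
b1 = make_block("alice pays bob 5", block_hash(genesis))
b2 = make_block("bob pays carol 2", block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))          # True
b1["data"] = "alice pays bob 500"   # tamper with history...
print(verify_chain(chain))          # False: the code catches it
```

Note that the code only secures the technology layer; who writes, audits, and runs that code is the social layer Helena points to.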


Being responsible for tech is not easy. Once you create it and put it out there, you need everyone to think more openly about governance and commit to it. Somewhere between profits and security lie public value, trust, and global citizenship. We need to foster sustainability, aiming for net positive! To recap: we need systems thinking, statecraft, and a great deal of diplomacy.

Thanks for inviting me, Amritha Jayanti!


