
Autonomous Avatars & Digital Me



According to Kozuka, “There can be three possible approaches to the law of avatars. They are to treat an avatar either as a device (thing) deployed by a person or corporation in the real world, as an agent of the latter, or as an entity with distinct legal personality.” In the future, a ‘digital me’ may work and make decisions on my behalf (if it is not already doing so). Today, these decisions would most likely be narrow, like automatic bank transfers to pay bills (and even here, is this a standing instruction, or does an invisible ‘digital me’ authorize each transaction?). In that case, the AI is my agent, making a payment because I instructed it to.

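To make that distinction concrete, here is a minimal, hypothetical Python sketch of the two readings. The AuthorityModel, Payment, and execute names are illustrative assumptions of mine, not any real banking or agent API; the point is only that the legally salient fact is who authorized the transfer.

    from dataclasses import dataclass
    from enum import Enum, auto


    class AuthorityModel(Enum):
        """Who authorized the transfer: the legally salient question."""
        STANDING_INSTRUCTION = auto()  # the human pre-authorized a fixed rule
        AGENT_AUTHORIZATION = auto()   # a 'digital me' decides each payment


    @dataclass
    class Payment:
        payee: str
        amount: float


    def execute(payment: Payment, model: AuthorityModel) -> str:
        # Record who authorized the transfer, since liability follows authority.
        if model is AuthorityModel.STANDING_INSTRUCTION:
            # Kozuka's 'device' reading: the software is a mere instrument,
            # so responsibility points straight at the instructing person.
            return (f"Paid {payment.payee} US$ {payment.amount:.2f} "
                    f"under a standing instruction.")
        # Kozuka's 'agent' reading: the software exercised discretion,
        # which is where agency-law questions begin.
        return (f"'Digital me' authorized US$ {payment.amount:.2f} "
                f"to {payment.payee}.")


    if __name__ == "__main__":
        bill = Payment(payee="utility-co", amount=120.00)
        print(execute(bill, AuthorityModel.STANDING_INSTRUCTION))
        print(execute(bill, AuthorityModel.AGENT_AUTHORIZATION))
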
What happens if I use an LLM, trained or biased with my customized input, to make decisions? Here, we may need to separate a ‘digital me’ from an ‘avatar,’ which is the representation of the self in a given virtual reality. An avatar does not need to make decisions (in fact, in most virtual games/worlds, it is more of a cosmetic representation of the self). This suggests we may need to look at the categorization of avatars (here is one example). Any such categorization also needs to acknowledge the complexity of these systems, as Yoon, Lee, and Shin have shown in the diagram below. Should a ‘digital me’ be photorealistic? Would we ascribe more responsibility to a ‘digital me’ than to an AI agent or avatar? Responsibility and law tend to overlap.

[Diagram by Yoon, Lee, and Shin: the complexity of avatar systems]


What could go wrong? Take the argument that an autonomous avatar or ‘digital me’ can be perceived as multiple manifestations of an individual. Consider the interesting case of State v. Milligan.[1] William Stanley Milligan, or Bill Milligan, was arrested in October 1977 for the kidnapping, robbery, and rape of three students from Ohio State University (OSU). At the trial, which took place in December 1978, it was found that Bill had multiple personalities (a condition later classified as a dissociative disorder), with one personality committing the robberies and another the kidnappings and rapes. Milligan was acquitted by reason of insanity, though he was committed to a mental institution. What, then, may happen when a fragment of a person misbehaves, gets kidnapped, and so on? The defense did not succeed in subsequent cases, State v. Grimsley (1982) and Kirkland v. State (1983).

This raises interesting questions for rogue avatars. Even if the model is a digital twin programmed to act like a person, can it really be autonomous, and who would bear the legal responsibility, especially if the physical person has died? Why is this important? Regulations and laws can be anticipatory, and we should consider the complex space between what can happen, what can be prevented, and responsible technology.

Enough data is publicly available to create a ‘digital me’ or a deepfake. A deepfake is a synthetic recreation of a person built from the person’s specific characteristics, such as their image or voice (suggesting we need to change our copyright laws). According to KPMG, remote work poses increasing concerns about deepfakes. A finance officer recently transferred US$25.6 million to an account after a virtual meeting with a CFO and colleagues who were all deepfakes. With the confluence of the public and private sectors in managing areas that have traditionally been governmental responsibilities, it gets messy. When the data of 170 million Koreans and foreigners was provided to a private company, without consent, for airport immigration facial recognition, it put the Korean Ministry of Justice and the Ministry of Science and ICT in hot water. KPMG highlighted this below.




It will only get easier to create deepfakes of a ‘digital me.’ Very few bills address the misrepresentation of a person’s image or the legal status of a person’s image. The data brokerage industry is growing, and with the amount of public, government, and private data available on each individual, it also raises serious safety issues. Cisco predicted that, by 2022, 40% of data would come from IoT devices, a market worth US$8 trillion. While the new UN resolution on AI, adopted in March 2024, is a feel-good step, AI systems are more complicated and need a whole-of-government and whole-of-industry effort to look into data, hardware, and software across global supply chains, especially since the resolution focuses on non-military domains while most AI developments originate in military use. Also, as mentioned in my previous blog posts, virtual worlds would require us to “rethink” human rights (see our preliminary report for the Council of Europe).

Image by ImaArtist from Pixabay


[1] State v. Milligan, 40 Ohio St. 3d 341, 533 N.E.2d 724 (Ohio 1988)





