The Legal Side of AI Investments: Risks and Restrictions for Venture Investors
Artificial intelligence (AI) is a technological field that reaches deep into human activity. It is commonly defined as the ability of a digital computer or computer-controlled robot to perform tasks associated with intelligent beings, or alternatively as an intelligent entity created by humans that can perform tasks without explicit instruction. AI already serves as an aid in medical tasks, sound processing, article writing, image generation, and more.
According to the current system of classification, there are four primary AI types:
- Reactive. Programmed to provide a predictable output based on the input it receives (e.g., spam filters that keep promotional email out of our inboxes).
- Limited memory. Uses observational data in combination with pre-programmed information to make predictions and perform complex tasks (e.g., autonomous vehicles observe the speed and direction of other cars to drive more safely). Importantly, these observations are not saved to the car's long-term memory, which is what gives this type its name.
- Theory of mind. Machines with theory of mind will be able to understand and remember emotions, then adjust their behavior accordingly while interacting. This form is still in development, in part because of the wide range of emotional expressions that we as human beings display.
- Self-aware. Machines that will be aware of their own emotions and have a level of consciousness and intelligence similar to human beings (still undeveloped, partly for lack of hardware capable of supporting it).
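The "reactive" type above can be illustrated with a minimal sketch. This is not how production spam filters work; it is a hypothetical rule-based filter showing the defining property of reactive AI: the same input always produces the same predictable output, with nothing learned or remembered between calls. The keyword list is an invented example.

```python
# Minimal sketch of a "reactive" AI in the spam-filter sense: a fixed
# mapping from input to output, with no memory between invocations.
SPAM_KEYWORDS = {"free", "winner", "promotion", "click here"}  # illustrative rule set

def is_spam(subject: str) -> bool:
    """Flag a message as spam if its subject contains a known keyword."""
    lowered = subject.lower()
    return any(keyword in lowered for keyword in SPAM_KEYWORDS)

print(is_spam("WINNER! Claim your free prize"))  # True
print(is_spam("Meeting agenda for Monday"))      # False
```

Because the rules are fixed in advance, such a system never adapts to new spam tactics; that limitation is exactly what the "limited memory" type addresses.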
Many experts worry that AI could bring negative consequences: job losses, erosion of privacy, easier surveillance and control of the masses, and the possibility that AI development could eventually override human abilities and skills. Despite these prevailing concerns, however, investment in the AI technology sector remains substantial.
Worldwide investment in AI companies has increased by 115% since 2020, the largest year-on-year growth in AI investment in at least two decades. Total AI investment reached $77.5 billion in 2021, a substantial increase from the previous record of $36 billion set in 2020. The US remains the global leader in AI thanks to a world-beating commercial scene, a large talent pool, and stellar research initiatives. In second place is China, which supports the rollout of AI technologies with its robust infrastructure and ambitious government strategy, but falls behind when it comes to talent. The UK remains in third place due to its superb pool of home-grown researchers and a strong AI startup scene, but still has areas of weakness in its development and operating environment, including a costly visa regime. When it comes to funding per capita, however, Israel – which has knocked South Korea out of its 5th-place spot – beats all other countries, with $325,000 invested in AI-focused companies for every million people. Eight additional countries have released government strategies on AI: Slovenia, Turkey, Ireland, Egypt, Malaysia, Brazil, Vietnam, and Chile.
Humanizing robots. An overview of the limitations of artificial intelligence:
- Not a suitable solution for all circumstances;
- Requires monitoring;
- Limited to pre-fed tasks;
- Maintenance and cost;
- Lacks creativity;
- Concerns around privacy, safety, and ethics;
- Vulnerability to adversarial attacks.
As technology develops, AI gets closer to actual consciousness. The United States has already granted rights and legal responsibilities to non-human entities, namely corporations. It is not unfathomable that robots and machines utilizing AI will be granted the same. Facebook has already created AI agents sophisticated enough to develop their own, non-human language. Were the civil rights of these machines violated when Facebook decided to shut them down? If AI commits a crime, can the software itself be held liable? Switzerland faced that very problem when a robot bought illicit substances online.
Why is there so much interest in investing into AI technology?
At first, it was mostly government funding that pushed AI interest and research forward; then a combination of corporate and venture capital interest took over; today, AI funding comes from every corner of the market. When investing in the development of an AI startup, we must highlight several risks that come with the investment, chief among them the difficulty of complying with legal regulations that have not kept pace with the technology.
The first risk concerns the rapid development of the technology and the impossibility of legal regulation keeping pace with it. When legal problems arise, they are most often cases of first impression: lawyers who have AI cases land in their laps will be treading uncharted territory, without a map, before judges who may not understand the technology. A second risk is assigning fault after an accident, a violation of rights, or similar circumstances. If a smart car hits a pedestrian, who is the guilty party? The programmer in the office with the source code? The owner on the road with the car? The manufacturer in the lab with the testing protocols? These questions highlight the need for clear and effective legal frameworks for AI.
Data privacy matters to all of us, yet security threats are on the rise: the financial impact of cybercrime has increased by 78%, and the time to resolve incidents has doubled. AI already tracks and predicts individuals' shopping preferences, political leanings, and locations. The data accumulated and shared among these technologies has already generated numerous legal controversies. AI is also starting to tackle more contentious subjects, such as predicting sexuality and propensity to commit a crime. Will such predictions be admissible at trial? Or will AI systems serve as expert witnesses, cross-examined to determine the validity of their opinions?
Furthermore, investing in AI poses challenges because established companies such as Amazon, Netflix, Facebook, and Microsoft dominate the market with their vast capital, swiftly acquiring would-be competitors and thereby hindering new entrants from making a significant impact or achieving a substantial ROI.
The primary regulatory bodies in the European Union are the European Parliament, the Council of the EU, and the European Commission. Technical standardization is taking the lead on AI regulation through associations like the IEEE and ISO, and through national and regional bodies like NIST in the US and CEN, CENELEC, AFNOR, Agoria, and Dansk Standards in Europe. In these settings, one key issue is the extent of government involvement.
As for Canada, it does not currently have a comprehensive legal framework to regulate AI. In the public sphere, there is the Directive on Automated Decision-Making which imposes a number of requirements, primarily related to risk management, on the federal government’s use of automated decision systems.
China has taken the lead in designing AI regulations. The country has a number of broader schemes in place to stimulate the development of the AI industry, such as "Made in China 2025", the "Action Outline for Promoting the Development of Big Data" (2015), and the "Next Generation Artificial Intelligence Development Plan" (2017). In recent years, China has also quickened the pace of promulgating specific policies to regulate AI, covering industry ethics and algorithms. Chinese regulators have applied strict scrutiny to the country's biggest tech companies, increasing oversight of data security and overseas listing policies. While the technology sector is still recovering from the crackdown of the past two years, some regulatory easing has been reported. Now facing tougher competition worldwide, China is again looking to its tech sector for leverage. A fuller comparison of AI regulations is available here.
The AI Act is a proposed European law on artificial intelligence – the first law on AI by a major regulator anywhere. It assigns applications of AI to three risk categories:
- Applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned;
- High-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements;
- Applications not explicitly banned or listed as high-risk are largely left unregulated.
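The three-tier logic above can be sketched as a simple classification. This is purely illustrative: the category sets below are simplified, hypothetical examples, not the AI Act's actual lists (which are defined in the regulation's annexes and are subject to change).

```python
# Illustrative sketch of the AI Act's three-tier risk logic.
# The category sets are invented examples, NOT the regulation's real lists.
UNACCEPTABLE = {"government social scoring"}
HIGH_RISK = {"cv screening", "credit scoring", "biometric identification"}

def risk_tier(use_case: str) -> str:
    """Map an AI use case to the AI Act's proposed risk category."""
    case = use_case.lower()
    if case in UNACCEPTABLE:
        return "banned"
    if case in HIGH_RISK:
        return "high-risk: specific legal requirements apply"
    return "largely unregulated"

print(risk_tier("CV screening"))          # high-risk tier
print(risk_tier("music recommendation"))  # largely unregulated
```

The default branch reflects the Act's structure: anything not explicitly banned or listed as high-risk falls through to the largely unregulated category.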
Have you ever heard of the Brussels effect? Compared to the European Commission’s proposal, the Council’s approach includes the following key changes:
- The definition of AI is narrowed to systems developed through machine-learning approaches and logic- and knowledge-based approaches, to more easily distinguish AI from simpler software systems. The European Commission will be empowered to further specify or update what the AI Act covers by adopting implementing acts;
- In relation to prohibited AI practices, the amended proposal extends the prohibition on using AI for social-scoring to private actors;
- The prohibition on the use of AI systems that exploit the vulnerabilities of specific groups is expanded to include vulnerabilities related to individuals' social or economic situation;
- The provisions banning the use of real-time remote biometric identification systems in public spaces by law enforcement authorities have been clarified to exempt uses strictly necessary for law enforcement purposes in exceptional cases;
- The list of high-risk AI systems is expanded to include new use cases in critical digital infrastructure and life and health insurance, but also excludes some use cases (e.g. “deep fake” detection by law enforcement, crime analytics, verification of authenticity of travel documents);
- The requirements for high-risk AI systems are clarified to make them more technically feasible, for instance, in relation to the technical documentation to demonstrate compliance with the AI Act and the quality of data;
- Provisions are added to clarify the responsibilities of various actors in the AI value chains used for the development and distribution of AI;
- Some of the requirements for high-risk AI systems can now also apply to general-purpose AI in specific situations, to be determined by an implementing act taking into account such factors as the specific characteristics of the systems, their impact on the rights and freedoms of individuals, technical feasibility, and market and technology developments. More about the Council's call for promoting safe AI that respects fundamental rights can be found here.
Artificial intelligence (AI) has the potential to revolutionize many industries and aspects of daily life. With the increasing advancements in AI technology, the possibilities for its applications are seemingly endless. AI systems are becoming increasingly sophisticated and can perform a wide range of tasks, from image and speech recognition to decision-making. However, while AI holds immense promise, it is important to approach its development and deployment with caution. Ethical considerations such as privacy and the impact on jobs must be taken into account to ensure that AI is used in ways that benefit society as a whole. It is up to us to direct the development of AI in a responsible manner, maximizing its positive impact and minimizing its negative consequences.