Artificial Intelligence: Balancing Innovation, Ethics, and Regulation

Jun 17, 2023

The European Parliament has taken a significant step forward by approving its position on a new proposal for regulating Artificial Intelligence. With this approval, the EU institutions will now negotiate the final text of the regulation. The regulation aims to set a worldwide standard for human-centric, trustworthy artificial intelligence, protecting health, safety, fundamental rights, democracy, the rule of law, and the environment from potential harm caused by AI systems.

However, the approach taken in the regulation has sparked controversy. While it aims to regulate the impact and use of AI, it also extends its reach to the technology itself, including foundation models and generative AI. This broad scope has raised concerns that the regulation could impede innovation.

The EU’s swift reaction can be attributed in part to the widespread adoption of AI tools such as ChatGPT and the controversies surrounding them. Because these tools have a significant impact on society, policymakers have grown increasingly concerned and have prioritized addressing the potential risks through the AI Act proposal.

CEOs from the AI industry, such as Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, have stressed that mitigating AI-related risks should be a global priority, alongside other societal-scale risks such as nuclear war and pandemics.

Opinions differ on whether AI can save the world. Marc Andreessen, co-creator of one of the first web browsers, has argued the optimistic case in the ongoing debate over AI’s potential risks. While many acknowledge the importance of managing those risks, advocates of this view caution against strict regulation or outright bans, arguing that technology will develop faster than regulators can respond and that heavy-handed measures could harm both society and the economy.

The three pillars of governance

The focus is on promoting responsible digitalization as the guiding strategy for the creation and application of technology, including generative AI. Companies such as Telefónica, for example, have adopted an AI governance model that embeds ethical principles and transparency requirements through a “Responsibility by Design” approach.

Looking ahead, the EU institutions should keep in view AI’s potential to transform the economy and society for the better. AI can strengthen competitiveness at both the regional and the business level, so the final AI Act should encourage innovation and help the internal market function better.

AI has become essential for businesses to remain competitive in today’s data-driven era. It can enable new business models, innovative services, more efficient operations, and positive social impact. Innovation, however, must remain human-centric and trustworthy. Telefónica’s public commitment reflects its dedication to enhancing prosperity while protecting people’s rights and preserving societal values.

To achieve this goal, a governance framework built on three pillars is proposed: global guidelines, self-regulation, and a proportionate regulatory framework. The EU AI Act should remain narrowly focused and be complemented by regional guidelines aligned with worldwide agreements and with ethical principles upheld by both public and private stakeholders.

As AI becomes more widespread, it is clear that no single country can govern its use alone. International cooperation and shared guidelines are needed to ensure that everyone follows the same ethical principles and best practices. Efforts in this direction already exist, including voluntary codes of conduct and non-binding international standards such as the AI Pact in Europe and collaborations between the EU and the US.

Self-regulation can benefit AI development, particularly for non-high-risk applications, offering the flexibility and efficiency to govern innovation without hindering progress. Because innovation moves quickly and AI is complex, rigid a priori rules risk being ineffective; well-designed self-regulation can still help safeguard individual rights, health, democracy, and safety.
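
To make the risk-based distinction concrete, here is a minimal, hypothetical Python sketch of how an internal governance process might triage AI systems by risk tier. The tier names mirror the risk-based structure of the AI Act proposal (unacceptable, high, limited, minimal), but the specific use cases, mapping, and obligations shown are simplified illustrative assumptions, not the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and ongoing oversight"
    LIMITED = "transparency obligations (e.g. disclosing AI use)"
    MINIMAL = "voluntary codes of conduct and self-regulation"

# Hypothetical, simplified mapping of example use cases to risk tiers.
# The AI Act defines its categories in legal text; this lookup is an
# illustrative assumption for the sketch, not a statement of the law.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def required_controls(use_case: str) -> str:
    """Return the governance obligation implied by a system's risk tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in USE_CASE_TIERS:
    print(required_controls(case))
```

The point of the sketch is only that obligations scale with risk: the lowest tier defaults to the self-regulatory codes of conduct discussed above, while higher tiers trigger progressively stricter controls.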

Telefónica has adopted ethical principles across all its operations, from the design and development of AI products and services to their use by employees, suppliers, and third parties. Its “Responsibility by Design” approach ensures that ethical and sustainability criteria are embedded throughout the value chain.

In sum, regulating AI calls for a broad approach that combines global cooperation, self-regulation, sound public policy, and a risk-based regulatory framework. Such a strategy can reduce risks, promote the ethical use of technology, and lay a solid foundation for innovation and economic progress. By striking the right balance between innovation, ethics, and regulation, we can shape a future in which AI empowers us while respecting society’s values and rights.