AI adoption: Balancing competition with the public interest

Dmitry Lesnik is co-founder and chief data scientist at Stratyfy, a firm that offers AI-powered predictive analytics and decision management solutions to small and medium-sized financial institutions.

With the rapid, widespread adoption of artificial intelligence (AI), we must ask: How are our thoughts, processes, and lives changing with this technology?

We’re already seeing misuse by scammers and autocrats. Discrimination, social polarization, and the concentration of power among a few entities are just a few of the unintended consequences of AI adoption. To address these challenges, prioritizing safety, accountability, and human oversight (e.g., human-in-the-loop review) can keep AI serving as a tool for sustainable and equitable progress.

Large language models (LLMs) raise practical concerns of their own. While current LLMs and similar technologies won’t lead to artificial general intelligence (AGI), they could still be extraordinarily impactful. This raises the question: How will social interactions, truth, and trust evolve, especially as synthetic agents increasingly dominate online communication?

LLMs are inherently incapable of logical reasoning; they rely on statistical patterns that cannot guarantee accuracy. If their reinforcement learning training rewards efficiency over faithful reasoning, how can we trust the explanations they produce? Without careful oversight, such weaknesses could expose humans to manipulation, whether intentional or not, underscoring the need for collective action and comprehensive strategies to maintain control.

AI’s trajectory and societal impact are shaped by actors ranging from tech giants to small startups, whose competing priorities oscillate between short- and long-term goals. The dynamics of this multi-agent interaction become especially clear when viewed through the lens of game theory.

At a micro level, many decisions are shaped by economic and political forces, whereas on a larger scale they follow patterns resembling evolutionary laws, playing out with near-mathematical determinism. As the technological arms race takes off, the push for rapid development outweighs risk considerations, fueled by a winner-takes-all mentality; the sketch below makes this dynamic concrete.
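
To see why rushing wins even when mutual caution would be better for everyone, consider a minimal two-player game. The payoff values below are illustrative assumptions, not figures from any study.

```python
# A minimal game-theory sketch of the development arms race. Two labs each
# choose to develop cautiously or to rush. Rushing against a cautious rival
# captures the winner-takes-all prize; mutual rushing erodes safety for both.

ACTIONS = ("cautious", "rush")

# PAYOFF[(my_action, rival_action)] -> my payoff (hypothetical values)
PAYOFF = {
    ("cautious", "cautious"): 3,  # shared, sustainable progress
    ("cautious", "rush"):     0,  # rival captures the market
    ("rush",     "cautious"): 5,  # winner-takes-all prize
    ("rush",     "rush"):     1,  # arms race, safety sacrificed
}

def best_response(rival_action: str) -> str:
    """Return the action that maximizes my payoff given the rival's move."""
    return max(ACTIONS, key=lambda mine: PAYOFF[(mine, rival_action)])

# "rush" is a dominant strategy: it beats "cautious" whatever the rival does,
# so both labs rush and land on payoff (1, 1) instead of the better (3, 3).
for rival in ACTIONS:
    print(f"rival plays {rival}: best response is {best_response(rival)}")
```

This is the familiar prisoner's-dilemma structure: individually rational choices produce a collectively worse outcome, which is exactly why coordination mechanisms matter.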

While there are encouraging signs of change, the industry requires a systematic framework to balance competition with safety and long-term societal interests.

Addressing the Risks

To steer away from these negative outcomes, we need to focus on two key areas:

1. Regulation

Effective regulation balances innovation with safety. Just as traffic lights, licensing, and penalties encourage safe driving, AI regulations can and should promote accountability and incentivize compliance, addressing potential risks without stifling innovation.

2. Tools for safer AI development

Equipping developers with tools for safer AI development is essential, including mechanisms for interpretability, human-in-the-loop oversight, and logical rules-based reasoning; a minimal example of such a decision gate appears below. Algorithms should be interpretable and accountable, with responsibility shared across the AI value chain. Safeguards are necessary to keep AI from amplifying biases and confirmatory thinking, which can breed misplaced confidence in AI outputs. As the power of AI increases, so does the responsibility to ensure safety and maintain public trust.
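
As a concrete illustration, here is a minimal sketch of such a decision gate. The credit-decisioning setting, thresholds, rule names, and `Decision` type are hypothetical assumptions, not a description of any real product.

```python
# A minimal human-in-the-loop decision gate (hypothetical example):
# explicit rules can veto the model, and uncertain scores go to a person.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # "approve", "decline", or "human_review"
    reasons: list[str]    # interpretable audit trail behind the outcome

def decide(model_score: float, rules_triggered: list[str],
           review_band: tuple[float, float] = (0.4, 0.6)) -> Decision:
    reasons = [f"model_score={model_score:.2f}"] + rules_triggered

    # Rules-based reasoning: explicit, human-readable rules override the
    # statistical model outright and force a second look.
    if rules_triggered:
        return Decision("human_review", reasons)

    # Human-in-the-loop oversight: scores in the uncertain band are routed
    # to a person instead of being decided automatically.
    low, high = review_band
    if low <= model_score <= high:
        return Decision("human_review", reasons)

    return Decision("approve" if model_score > high else "decline", reasons)

print(decide(0.85, []))                     # confident score: auto-approve
print(decide(0.55, []))                     # uncertain score: human review
print(decide(0.90, ["income_unverified"]))  # rule fires: human review
```

The design choice worth noting is that every outcome carries its reasons, so the system produces an auditable trail rather than a bare score.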

By prioritizing these approaches, we can mitigate risks and create a more equitable and secure AI ecosystem. 

At the same time, AI must move beyond task-specific optimization. Current systems, like deep learning models and LLMs, lack true understanding and the ability to manipulate logical constructs. Incorporating logic and rules into AI will enhance decision-making, improve transparency, and enable systems to function within human-comprehensible frameworks, as the small example below illustrates.
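
One way to picture this is a hard, human-readable rule wrapped around an opaque model. The credit-limit scenario and function names below are hypothetical, chosen only to illustrate the pattern.

```python
# A toy neuro-symbolic pattern (hypothetical example): a statistical model
# proposes, an explicit logical rule constrains. The rule stays readable
# and auditable even when the model is not.

def model_suggested_limit(features: dict) -> float:
    """Stand-in for an opaque statistical model's suggested credit limit."""
    return 12_000.0  # hypothetical model output

def constrained_limit(features: dict, max_income_ratio: float = 0.3) -> float:
    suggested = model_suggested_limit(features)
    # Hard, human-comprehensible rule: the limit never exceeds a fixed
    # share of verified income, regardless of what the model suggests.
    ceiling = features["verified_income"] * max_income_ratio
    return min(suggested, ceiling)

# The rule binds: 30,000 * 0.3 = 9,000, below the model's 12,000 suggestion.
print(constrained_limit({"verified_income": 30_000.0}))  # 9000.0
```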

The acceleration of AI’s development is transforming nearly every facet of our lives, and with adoption timelines shrinking, we must focus on maximizing its positive impact while preventing misuse.

Achieving this requires informed and deliberate human control, guided by critical thinking, accountability, and collaboration. Human oversight must remain central to AI deployment, ensuring decisions are transparent and aligned with societal values. By fostering innovation with appropriate safeguards, we can align AI development with humanity’s best interests. The ultimate goal is not just risk management but shaping a future where technology enhances human capabilities, enriches lives, and promotes equity and sustainability.