TECH & AI
Richard Harmon, Vice President and Global Head of Financial Services at Red Hat
AI safety will become a growing area of focus and investment. A core element of this will be the need for firms to invest in an enterprise-wide AIOps platform that integrates a range of capabilities to support robust model lifecycle management as well as security requirements.
As well as supporting AI safety, this helps give firms the agility to comply with rapidly evolving operational and regulatory requirements.
Open-source AI tools that are community-developed and carefully managed can ensure a high degree of operational resiliency. They make it possible to run critical AI workloads on any platform while delivering the required accessibility and agility transparently and consistently across the enterprise.
Investments in wide-scale enablement and learning programmes are critical to a firm's ability to integrate gen AI capabilities into business workflow processes and customer-facing services effectively, safely and in a trustworthy manner.
Agentic AI offers exciting transformative capabilities, not only for executing complex workflows, but also for potentially supporting autonomous oversight as part of a robust AI safety framework.
These capabilities can improve current methods used for monitoring, validating and correcting outputs that are inaccurate, biased, or misleading.
UNDERSTANDING "AGENTIC AI COLLUSION"
Richard Harmon, Vice President and Global Head of Financial Services at Red Hat
"I want to point out that there is a growing literature about the potential risks of agent collusion in future deployments of agentic platforms that enable large-scale interactions between autonomous agents.
"My personal interest in this topic is focused on AI trading agents and the potential systemic risks associated with colluding agents destabilising markets, in similar ways that humans have done in the not-so-distant past, e.g. the 2008 global financial crash.
"Agentic collusion is becoming a core research focus, centred on concerns about the potential for multi-agent deception, where AI agents use steganographic methods to hide their interactions from oversight.
"This can lead to unintended information sharing or undesirable coordination. While not an issue today, it is a research topic among academics, AI developers and regulators who are looking at the broader requirements for AI safety."