Lakera Secures $20M to Shield Enterprises from Generative AI Threats and LLM Vulnerabilities

Lakera Secures $20M to Shield Enterprises from Generative AI Threats and LLM Vulnerabilities

Lakera, a Swiss startup devoted to shielding generative AI systems from fraudulent prompts and other dangers, has raised $20 million in a Series A fundraising round, lead by European venture capital firm Atomico.

Although generative AI has acquired a lot of traction—as demonstrated by well-known apps like ChatGPT—security and data privacy concerns continue to be key problems for businesses. Generative AI is powered by large language models (LLMs), which allow machines to produce and comprehend text in a manner akin to that of humans. Malicious prompts, however, have the ability to deceive these models into carrying out unwanted tasks, like disclosing sensitive information or allowing unauthorized access to private networks. With Lakera, these increasing "prompt injection" risks are to be addressed.

Having been established in 2021 in Zurich, Lakera began operations in October of last year with $10 million in seed money. Its goal is to shield enterprises against LLM security flaws like as rapid injections and data leaks. Their technology works with several LLMs, including as Claude from Anthropic, Google's Bard, OpenAI's GPT-X, and Meta's Llama. As a "low-latency AI application firewall," Lakera claims to be able to secure communication between and within generative AI applications.

The database used by Lakera's first product, Lakera Guard, was created using a variety of sources, including internal machine learning research, open-source datasets from websites like Hugging Face, and an interactive game called Gandalf that tests players' ability to trick the system into disclosing a secret password. Lakera's "prompt injection taxonomy," which classifies these attacks, was developed with the assistance of these interactions.

In order to quickly identify malicious prompt injections, the business develops its own models. By continuously learning from a large number of generative AI interactions, these models are able to recognize dangerous tendencies and adapt to new threats.

Businesses can safeguard themselves from harmful prompts by integrating the Lakera Guard API. Furthermore, Lakera has created models that look for harmful content, such as profanities, violent content, hate speech, and sexual content, in prompts and application outputs. Although they are used in other situations as well, these detectors are especially helpful for apps that interact with the public, such as chatbots. With just one line of code, businesses can incorporate Lakera's content moderation features and set specific content thresholds via a centralized policy control panel.

With this fresh capital of $20 million, Lakera intends to increase its global footprint, particularly in the United States. Notable clients of the business already exist in North America, such as the Canadian unicorn Cohere and the American AI startup Respell.

Securing AI applications is a goal shared by SaaS providers, large corporations, and providers of AI models. Although interest comes from a variety of industries, financial services businesses are early adopters and particularly aware of the security and compliance threats. To stay competitive, the majority of businesses understand that they must integrate generative AI into their fundamental business operations.

In addition to Atomico, Redalpine, Citi Ventures, and Dropbox's venture capital division participated in Lakera's Series A investment.

Code Labs Academy © 2024 All rights reserved.