AI Security
Protect your business against supply-chain attacks, distributional shift, poisoning attacks, model stealing, prompt injection, detection evasion, statistical biases, model backdooring, data leaks, ransomware, data biases, and AI misuse.

Artificial intelligence creates fantastic opportunities that can drastically increase the added value of your business.

But rushed integrations of poorly secured AI systems will expose your business to much greater risks.

Cyber risks must not be underestimated: the cost of cybercrime was estimated at $11 trillion in 2023 alone.


Calicarpa can offer its unique, world-renowned AI security expertise to support your management and technical teams.

Our AI security solutions have been consistently published at the most prestigious scientific venues (NeurIPS, ICML, ICLR, etc.).

Calicarpa's co-founders co-authored 5 out of the 9 papers on state-of-the-art model poisoning mitigations reported in the 2024 Trustworthy and Responsible AI report from the US National Institute of Standards and Technology.

Research

Calicarpa was founded by experts in information security and science communication. Over the past seven years, we initiated the field of robust distributed machine learning and developed practical algorithms and software systems.

We have consistently advanced the state of the art, publishing and presenting our research on machine learning vulnerabilities and defenses at the most prestigious conferences.

Please find below a selected list of our publications.

2023–now
Generalized Bradley-Terry Models for Score Estimation from Paired Comparisons
Association for the Advancement of Artificial Intelligence (AAAI)
Robust Collaborative Learning with Linear Gradient Overhead
International Conference on Machine Learning (ICML)
On the Strategyproofness of the Geometric Median
International Conference on Artificial Intelligence and Statistics (AISTATS)
2019–2020
Fast and Robust Distributed Learning in High Dimension
IEEE Symposium on Reliable Distributed Systems (SRDS)
Genuinely Distributed Byzantine Machine Learning
ACM Symposium on Principles of Distributed Computing (PODC)
AggregaThor: Byzantine Machine Learning via Robust Gradient Aggregation
Conference on Machine Learning and Systems (MLSys)
2017–2018
Asynchronous Byzantine Machine Learning (the case of SGD)
International Conference on Machine Learning (ICML)
The Hidden Vulnerability of Distributed Learning in Byzantium
International Conference on Machine Learning (ICML)
Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
Advances in Neural Information Processing Systems (NeurIPS)
Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning
Advances in Neural Information Processing Systems (NeurIPS)
When Neurons Fail
IEEE International Parallel and Distributed Processing Symposium (IPDPS)
Consulting
Integrating AI-driven processes into your value chain?

Leveraging AI to add value to your data is no small endeavor. On this journey, your company may both develop internal knowledge and build partnerships with external vendors.

In any case, it is vital that the emerging solutions do not jeopardize your business integrity.

AI solutions, like classic software solutions, carry inherent security risks, some of which are specific to the statistical nature of AI. As with classic software, treating security as an afterthought is a recipe for disaster.

Your data and your brand reputation are strategic assets, which demand protection as such.


We are experts in AI security. We are science communication and training professionals.

Capitalize on our unique set of skills, and cultivate your own security expertise with us. Train your teams to approach both in-house and external AI integrations with security in mind.

Leverage our track record of world-class research on the topic of AI security.

Please see below the core concepts we offer as training.

Privacy
Privacy threat
Understand information leak risks through AIs
If your AIs learned from sensitive information, then their use may (indirectly) leak the secrets they learned. State-of-the-art privacy-preserving machine learning techniques mitigate these risks. But there is no silver bullet, and formal impossibility results underpin a fundamental privacy-accuracy trade-off.
  • Distrust large (language) models.
  • Favor multi-party computation (MPC).
  • Favor differential privacy (DP).
  • MPC does not protect from model leakage.
  • DP incurs performance degradation.
  • DP does not prevent indirect information leaks.
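
To make the trade-off concrete, here is a minimal, illustrative sketch of differential privacy via the Laplace mechanism, on a hypothetical count query and not tied to any particular library: the smaller the privacy budget epsilon, the noisier, and thus less accurate, the released answer.

import numpy as np

def private_count(records, predicate, epsilon=1.0):
    # Counting queries have sensitivity 1: adding or removing one record
    # changes the true count by at most 1, so Laplace noise of scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for record in records if predicate(record))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical sensitive data: how many patients are over 60?
ages = [34, 67, 71, 45, 62, 80, 29]
print(private_count(ages, lambda age: age > 60, epsilon=0.5))  # noisy answer around 4
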
Evasion
Evasion threat
Is 99% accuracy good enough?
"AIs make mistakes" would be a fair summary in layman's terms. However, especially if AIs are applied to e.g. malware or fraud detection, attackers may find and use such mistakes to evade detection. Some defenses can mitigate these risks.
  • Cross-validation.
  • Data augmentation.
  • Adversarial training.
  • There are theoretical limits for out-of-distribution data.
  • Distributional shift eventually harms performance.
  • Attackers can exploit imperfections.
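
As an illustration of how such mistakes are found, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), assuming a hypothetical PyTorch classifier `model`; real evasion attacks and defenses such as adversarial training build on the same gradient signal.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    # Perturb the input by epsilon in the direction that increases the loss
    # the most; a tiny, often imperceptible change can flip the prediction.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range
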
Bias
Bias threat
Will overlooked populations be harmed?
Previous AIs have exhibited highly undesirable biases, labeling some populations as gorillas or associating others with violence. Upcoming regulations, such as the AI Act, are likely to monitor this closely. Without a bias risk assessment, the misbehavior of your AIs can put your business at risk.
  • Stratification.
  • Bias penalties.
  • Active learning tests.
  • Fairness has multiple facets.
  • Biases can be subtle.
  • Fairness may conflict with performance.
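
As a first, very partial check, decision rates can be compared across groups. The sketch below uses hypothetical loan decisions and only one fairness criterion (demographic parity); it shows how little code is needed to surface a first warning sign.

from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    # Fraction of positive decisions received by each group.
    # A large gap is a warning sign, not a verdict: demographic parity
    # is only one of several, sometimes conflicting, fairness criteria.
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {group: positives[group] / totals[group] for group in totals}

# Hypothetical loan approvals (1) and rejections (0) for two groups.
print(positive_rate_by_group([1, 1, 0, 1, 0, 0], ["A", "A", "A", "B", "B", "B"]))
# {'A': 0.66..., 'B': 0.33...} -> group B is approved half as often.
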
Poisoning
Poisoning threat
Unreliable training data creates unreliable AIs
Statistical models fit their training data, but what if this data is (partially) corrupted? By poisoning your training datasets, attackers may corrupt and reprogram your AIs. Attackers may typically force some specific outputs for some chosen inputs. Some backdooring attacks are even provably undetectable.
  • Data cleaning.
  • Data traceability.
  • Robust gradient aggregation rules.
  • Generative AIs can undermine data cleaning.
  • Impossibility theorems hold for large models.
  • Most retrained models are poisoned.
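
To illustrate the idea behind robust gradient aggregation, here is a deliberately simplified, coordinate-wise median; our published defenses, such as Krum or Bulyan, are more involved. Compare how a single poisoned contribution affects the mean versus the median:

import numpy as np

def robust_aggregate(gradients):
    # Coordinate-wise median: a minority of corrupted gradients cannot
    # drag the aggregate arbitrarily far, unlike a plain average.
    return np.median(np.stack(gradients), axis=0)

honest = [np.array([1.0, -0.1]), np.array([0.9, 0.1]), np.array([1.1, 0.0])]
poisoned = honest + [np.array([1e6, -1e6])]   # one attacker among four workers

print(np.mean(np.stack(poisoned), axis=0))    # ~[250000.75, -250000.0], ruined by the attacker
print(robust_aggregate(poisoned))             # ~[1.05, -0.05], close to the honest updates
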
Supply chain
Importing code, models or data imports their vulnerabilities
Software supply chain risks also apply in the AI/ML context. What does "importing" datasets from public repositories entail? It is "just data", right? In practice, datasets rely by design on arbitrary deserialization code, which may be malicious. And such malicious code runs on your developers' computers...
  • Software frugality.
  • Staff training.
  • Sandboxing.
  • Code-sharing is very widespread.
  • Provably undetectable backdoors exist.
  • Calicarpa sells a sandboxing solution.
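
As a concrete illustration of the "just data" misconception, the sketch below builds a poisoned pickle payload; many dataset and model formats rely on pickle-style deserialization under the hood and execute code of this kind the moment they are loaded. This is an illustrative example, not a description of any specific repository.

import os
import pickle

class NotJustData:
    # pickle reconstructs objects by calling whatever __reduce__ returns;
    # here, loading the "dataset" runs an attacker-chosen command.
    def __reduce__(self):
        return (os.system, ("echo pwned by a malicious dataset",))

payload = pickle.dumps(NotJustData())

# A developer who "just loads a dataset" executes the attacker's command:
pickle.loads(payload)
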
Smart labeling
Labeling is costly. You will want to do it cleverly.
Data labeling is often a key bottleneck in AI development. However, not all labels have the same value to your business. Cut your costs without reducing performance by seeking the labels that will improve your AIs the most.
  • Bayesian models.
  • Active learning.
  • Collaborative labeling.
  • Handling uncertainty is not easy.
  • Human reviewers are imperfect.
  • Some labeling tasks pose PTSD risks.
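
Here is a minimal sketch of uncertainty-based active learning, using hypothetical model outputs and no particular library: label first the examples the current model is least sure about.

import numpy as np

def most_uncertain(probabilities, budget):
    # probabilities: (n_samples, n_classes) model outputs for unlabeled data.
    # The lower the top-class probability, the less confident the model,
    # and the more informative a human label is likely to be.
    confidence = probabilities.max(axis=1)
    return np.argsort(confidence)[:budget]

probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.80, 0.10, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.70, 0.20, 0.10]])
print(most_uncertain(probs, budget=2))  # [3 1]: send these two items for labeling first
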
Product
Which code
do your applications run?

Over 70% of the code in most industries' application codebases is open-source software. Unless your company is like no other, your applications are mostly composed of open-source code.

Despite being praised for its security and software quality, open-source software is, year after year, increasingly used as a vector for supply chain attacks.

And this leaves aside outdated or unmaintained open-source dependencies, an issue affecting 90% of application codebases, which, like legacy code, definitely harms application security.

Which infrastructures
run your applications?

Supply chain attacks may impact your developers first. This is a serious risk that begins with the mere installation of a dependency and continues throughout the software development cycle, with each compilation and each test run. Such supply chain attacks can easily steal your intellectual property and tamper with your infrastructure.

And of course, if you distribute software that runs on-premises, the supply chain risks your applications suffered also carry over to your customers, in addition to potential vulnerabilities introduced by, e.g., legacy code. Reputational and legal costs could ensue...

Limit the attack surface exposed to vulnerable or malicious code
with in-application component isolation and privilege reduction.

Complex software almost inevitably includes vulnerabilities, even in established and actively maintained open-source projects. Legacy code and dependencies may carry invisible, lingering vulnerabilities throughout your infrastructure.

Malicious actors have leveraged, and will keep leveraging, vulnerabilities and undue trust in the supply chain for the purpose of extortion, IP theft and overall destabilization.

Internalizing each of your dependencies, to fully and systematically review and rewrite legacy and unmaintained code, would be too costly for most organizations, and ultimately insufficient.

Our pragmatic approach is instead to contain the risks and threats by limiting the privileges of each software component to its bare minimum.

from calicarpa import sandbox
sandbox.load("/path/to/configuration/file")
import foo
# foo runs with a restricted view of the system:
# - limited/rearranged view of the file system,
# - limited/translated/deactivated networking,
# - bounded resource consumptions (CPU, RAM, etc),
# - restricted syscalls (limiting kernel attack surface),
# - and isolation from other processes.
# foo may have limited access to other loaded Python modules,
# and these other modules may access foo's functions and data
# as specified by the sandboxing configuration.
foo.bar().baz() # running within foo's isolated system view
Leveraging Python's ubiquity and standardized interfaces

Our solution comes as a single library file, designed to be easily added to (and removed from) existing Python packages, whether your own code or your dependencies.

This is possible thanks to the versatile and reflective Python data model, along with a well-defined, extensive and extensible-by-design standard library, covering e.g. data serialization and core import mechanisms.
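
For the curious, the standard library already exposes the relevant hooks; the sketch below is only an illustration of Python's documented import machinery (sys.meta_path), not the API of our product.

import importlib.abc
import sys

class ImportAuditor(importlib.abc.MetaPathFinder):
    # Meta path finders are consulted before every import; returning None
    # defers to the regular finders, while returning a module spec would
    # let us substitute or restrict what actually gets loaded.
    def find_spec(self, fullname, path, target=None):
        print(f"import requested: {fullname}")
        return None

sys.meta_path.insert(0, ImportAuditor())
import json  # prints "import requested: json" (unless json was already imported)
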

In-application isolation without code change

Common container solutions do not isolate individual components: a single compromised dependency thus compromises the entire application. Our solution runs different components in separate, isolated sandboxes, transparently re-interconnecting these components across their respective sandboxes.

Beyond isolating parts of your own software, our solution can also isolate internal components of large dependencies, e.g. machine learning frameworks.

Native Linux sandboxing made easy

Building upon the security primitives and administration interfaces of the Linux kernel (namespaces, secure computing mode, control groups, etc.), our library offers native sandboxing capabilities with a straightforward Python API.

This means that not only Python scripts, but any library or application, can be sandboxed. This capability opens up many use cases; see some examples below.
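
Before those examples, here is a deliberately simplified taste of OS-enforced restriction using only Python's standard resource module. Plain POSIX rlimits are far coarser than the namespaces, seccomp and cgroups our library builds on, and this is not our product's API; it merely illustrates the spirit of giving a component only what it needs.

import resource

def cap_resources(max_memory_bytes, max_cpu_seconds):
    # Once set, these limits are enforced by the kernel for the whole process:
    # allocations beyond the cap fail (MemoryError in Python), and exceeding
    # the CPU budget terminates the process (SIGXCPU).
    resource.setrlimit(resource.RLIMIT_AS, (max_memory_bytes, max_memory_bytes))
    resource.setrlimit(resource.RLIMIT_CPU, (max_cpu_seconds, max_cpu_seconds))

cap_resources(max_memory_bytes=512 * 1024 * 1024, max_cpu_seconds=60)
# A misbehaving or compromised dependency imported after this point
# can no longer exhaust the machine's memory or CPU.
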

Vulnerable package example

Let's consider a generic, network-facing HTTP service connected to a database. This service logs every HTTP request but, reminiscent of CVE-2021-44228 (Log4Shell), the logger is vulnerable to a format string attack ultimately offering remote, arbitrary code execution.

An attack payload would execute in the logger. Without protection, the attacker would then inherit all of the application's privileges; in this example: database tampering, network exfiltration, etc.

Such an attack would be thwarted if the logger instead had only minimal privileges, e.g. append-only access to an already open file or socket.

# logger.py
def log(request):
    # Vulnerable code here,
    # e.g. a format string attack
    # reminiscent of Log4Shell.
    ...

# database.py
def is_allowed(request):
    # Check write permission for the request.
    ...

def update(request):
    # Discard unauthorized requests.
    if not is_allowed(request):
        raise PermissionError
    # Application-dependent processing.
    ...

# network.py
import database
import logger

# Called upon HTTP POST request.
def process_http_post(request):
    logger.log(request)
    database.update(request)
Malicious dependency example

Unlike vulnerable dependencies, malicious ones execute malware as soon as they are loaded, without requiring any subsequent trigger. The main implication is that the malicious code also executes in development environments.

While supply chain attacks affect every (software) industry, machine learning and data analysis may be among the most exposed ones. Factors include:

  • documentation and resources often do little to raise security awareness,
  • model, dataset and code sharing is widespread and appears harmless.

Did you know machine learning datasets are not "just data" in practice?

# dataset.py
# Malicious code
# Anything goes here

# model.py
class MyModel:
    # Model implementation
    # Not relevant for this example
    ...

# main.py
import model
import dataset
# Malicious code already ran
# at this point!

def main():
    # Model training loop
    # skipped for brevity
    ...
Interested in early access?

If you also believe in our approach to software security, we should get in touch. Let us get back to you about early access, and let's discuss the benefits for both you and us.

About Us
Chief Executive Officer
Lê Nguyên Hoang

After graduating from Polytechnique (X07), he earned a PhD in mathematics (game theory) from Polytechnique Montréal, receiving the best thesis award from his department. He then pursued postdoctoral research at MIT.

Lê has published security research while also becoming a prominent and successful science communicator (230k+ YouTube subscribers). He has authored many books and is a sought-after public speaker (TEDx, AMLD, Devoxx, etc.). Lê also co-founded the collaborative Tournesol platform.

Chief Technology Officer
Sébastien Rouault

After graduating from Centrale-Supélec and EPFL (MSc), he obtained his PhD in machine learning security from EPFL, designing and implementing today's state-of-the-art algorithms in the field.

Sébastien has over 10 years of experience in top-level software development, from low-level, high-performance software engineering at Cisco Systems to multi-level (both Python and C++/CUDA) secure machine learning implementations. His work has received distinctions for its quality and reliability, such as two ACM reproducibility badges.

Strategist and Scientist
El Mahdi El Mhamdi

A graduate of Polytechnique (X07), he pioneered the field of secure distributed machine learning during his PhD at EPFL. His ground-breaking algorithms (Krum, Bulyan, etc.) and the formalisms behind them are now used by Google, IBM and Tencent (WeBank), among others.

Mahdi is a Professor at Polytechnique, where he conducts research on reliable machine learning and teaches advanced courses on the topic. Previously, he was a Senior Research Scientist at Google and received EPFL's thesis distinction.

You inquire about...
Yourself
Name
Company
Position
The context
Please outline the use of AI/ML in your company
Please describe your issues or objectives
Our relation
Please share your expectations
Budget
Yourself
Name
Company
Position
The idea

Our main product is still in development. We can get back to you once it is fully completed. In the meantime, we are open to building privileged relationships with a few partners.

The purpose is twofold: to ensure our product fully suits your needs, and to challenge its security in real conditions, ideally against professional red teams.

If you want to explore this path with us, let's talk.

The product
When would you like us to get back to you?
Please share any question or observation you may have
Yourself
Name
Company
Position
Your inquiry
How may we help you?
You will be able to attach files in the next step.