
Safety and Ethics in AI - Meltwater’s Approach


Giorgio Orsi

Aug 16, 2023

AI is transforming our world, offering amazing new capabilities such as automated content creation, data analysis, and personalized AI assistants. While this technology brings unprecedented opportunities, it also poses significant safety concerns that must be addressed to ensure its reliable and equitable use.

At Meltwater, we believe that understanding and tackling these AI safety challenges is crucial for the responsible advancement of this transformative technology.

The main concerns for AI safety revolve around how we make these systems reliable, ethical, and beneficial to all. These concerns stem from the possibility of AI systems causing unintended harm, making decisions that are not aligned with human values, being used maliciously, or growing so powerful that they become uncontrollable.


Robustness

AI robustness refers to a system's ability to perform consistently well even under changing or unexpected conditions.

If an AI model isn't robust, it may easily fail or provide inaccurate results when exposed to new data or scenarios outside of the samples it was trained on. A core aspect of AI safety, therefore, is creating robust models that can maintain high-performance levels across diverse conditions.

At Meltwater, we tackle AI robustness at both the training and inference stages. We employ techniques such as adversarial training, uncertainty quantification, and federated learning to improve the resilience of AI systems in uncertain or adversarial situations.
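
To make the training-stage idea concrete, here is a minimal sketch of one adversarial-training step using the Fast Gradient Sign Method (FGSM), one common way to implement adversarial training. The model, optimizer, and `epsilon` are illustrative assumptions, not a description of Meltwater's production pipeline.

```python
# A minimal sketch of one adversarial-training step with FGSM.
# The model, optimizer, and epsilon are illustrative assumptions,
# not a description of Meltwater's production pipeline.
import torch
import torch.nn as nn

def fgsm_adversarial_step(model, x, y, optimizer, epsilon=0.1):
    loss_fn = nn.CrossEntropyLoss()

    # 1. Compute the loss gradient with respect to the inputs.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()

    # 2. Perturb the inputs in the direction that increases the loss.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 3. Train on clean and adversarial examples together, so the
    #    model learns to resist small worst-case perturbations.
    optimizer.zero_grad()
    total_loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```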

Alignment

In this context, “alignment” refers to the process of ensuring AI systems’ goals and decisions are in sync with human values, a concept known as value alignment.

Misaligned AI could make decisions that humans find undesirable or harmful, despite being optimal according to the system's learning parameters. To achieve safe AI, researchers are working on systems that understand and respect human values throughout their decision-making processes, even as they learn and evolve.

Building value-aligned AI systems requires continuous interaction and feedback from humans. Meltwater makes extensive use of human-in-the-loop (HITL) techniques, incorporating human feedback at different stages of our AI development workflows, including online monitoring of model performance.

Techniques such as inverse reinforcement learning, cooperative inverse reinforcement learning, and assistance games are being adopted to learn and respect human values and preferences. We also leverage aggregation and social choice theory to handle conflicting values among different humans.
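
As an illustration of the social-choice idea, here is a minimal sketch that aggregates conflicting annotator rankings with a Borda count, one classic social-choice rule. The option names and rankings are hypothetical, for illustration only.

```python
# A minimal sketch of aggregating conflicting human preferences with
# a Borda count, one classic social-choice rule. The option names and
# rankings below are hypothetical.
from collections import defaultdict

def borda_count(rankings):
    """Each ranking lists options from most to least preferred."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top choice earns n-1 points
    return max(scores, key=scores.get), dict(scores)

# Three annotators rank three candidate model behaviors.
rankings = [
    ["hedge", "refuse", "answer"],
    ["answer", "hedge", "refuse"],
    ["hedge", "answer", "refuse"],
]
winner, scores = borda_count(rankings)
print(winner, scores)  # hedge {'hedge': 5, 'refuse': 1, 'answer': 3}
```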

Bias and Fairness

One critical issue with AI is its potential to amplify existing biases, leading to unfair outcomes.

Bias in AI can result from various factors, including (but not limited to) the data used to train the systems, the design of the algorithms, or the context in which they're applied. If an AI system is trained on historical data that contain biased decisions, the system could inadvertently perpetuate these biases.

An example is a job-screening AI that unfairly favors a particular gender because it was trained on past hiring decisions that were biased. Addressing fairness means making deliberate efforts to minimize bias in AI, thus ensuring it treats all individuals and groups equitably.

Meltwater performs bias analysis on all of our training datasets, both in-house and open source, and adversarially prompts all Large Language Models (LLMs) to identify bias. We make extensive use of behavioral testing to identify systemic issues in our sentiment models, and we enforce the strictest content moderation settings on all LLMs used by our AI assistants. To minimize the impact of AI bias in our products, we leverage multiple statistical and computational fairness definitions, including (but not limited to) demographic parity, equal opportunity, and individual fairness.
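
To make two of those definitions concrete, here is a minimal sketch that computes the demographic parity gap and the equal opportunity gap on hypothetical binary predictions; it is illustrative, not a description of Meltwater's tooling.

```python
# A minimal sketch of two fairness checks named above, computed with
# NumPy on hypothetical binary predictions; not Meltwater's tooling.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical group labels
print(demographic_parity_gap(y_pred, group))         # 0.0
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.33
```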

Interpretability

Transparency in AI, often referred to as interpretability or explainability, is a crucial safety consideration. It involves the ability to understand and explain how AI systems make decisions.

Without interpretability, an AI system's recommendations can seem like a black box, making it difficult to detect, diagnose, and correct errors or biases. Consequently, fostering interpretability in AI systems enhances accountability, improves user trust, and promotes safer use of AI. Meltwater adopts standard techniques, like LIME and SHAP, to understand the underlying behaviors of our AI systems and make them more transparent.
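
As a concrete example of the SHAP approach, here is a minimal sketch using the open-source shap package on a toy scikit-learn regressor; the dataset and model are illustrative assumptions, not Meltwater's systems.

```python
# A minimal sketch using the open-source `shap` package on a toy
# scikit-learn regressor; the dataset and model are illustrative
# assumptions, not Meltwater's systems.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)              # exact for tree models
shap_values = explainer.shap_values(X.iloc[:100])  # (samples, features)

# Mean absolute attribution per feature gives a global importance view.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.1f}")
```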

Drift

AI drift refers to changes over time in the data a model encounters in production: the input distribution can shift (data drift), or the relationship between inputs and outcomes can change (concept drift). Either shift can degrade the AI model's performance, impacting the reliability and safety of its predictions or recommendations.

Detecting and managing drift is crucial to maintaining the safety and robustness of AI systems in a dynamic world. Handling drift effectively requires continuous monitoring of the system's performance and updating the model whenever necessary.

Meltwater monitors the distributions of inferences made by our AI models in real time to detect model drift and emerging data quality issues.
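
As an illustration of distribution monitoring, here is a minimal sketch of a drift check that compares a reference window of model scores to a live window with a two-sample Kolmogorov-Smirnov test; the data, window sizes, and threshold are illustrative choices, not Meltwater's actual monitoring stack.

```python
# A minimal sketch of a drift check: compare a reference window of
# model scores to a live window with a two-sample Kolmogorov-Smirnov
# test. The data, windows, and threshold are illustrative choices.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(reference_scores, live_scores, alpha=0.01):
    """Flag drift when the two score distributions differ significantly."""
    statistic, p_value = ks_2samp(reference_scores, live_scores)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)  # e.g. last month's model scores
live = rng.normal(0.3, 1.0, 5000)       # this week's scores, shifted
print(has_drifted(reference, live))     # True: the shift is detected
```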

The Path Ahead for AI Safety

AI safety is a multifaceted challenge requiring the collective effort of researchers, AI developers, policymakers, and society at large. 

As a company, we must contribute to creating a culture where AI safety is prioritized. This includes setting industry-wide safety norms, fostering a culture of openness and accountability, and maintaining a steadfast commitment to using AI to augment our capabilities in a manner aligned with Meltwater's most deeply held values.

With this ongoing commitment comes responsibility, and Meltwater's AI teams have established a set of Meltwater Ethical AI Principles inspired by those from Google and the OECD. These principles form the basis for how Meltwater conducts research and development in Artificial Intelligence, Machine Learning, and Data Science.

  1. Benefit society whenever opportunities arise in inclusive and sustainable ways.
  2. Bias and drift are defects. They fail the business and our customers.
  3. Treat safety, privacy, and security as first-class citizens.
  4. Trace everything and be accountable. Transparency is key.
  5. We are scientists and engineers; everything must be proven and tested.
  6. Use open source whenever possible; vet everything else and assume it is unsafe.

Meltwater has established partnerships and memberships to further strengthen its commitment to fostering ethical AI practices. 

We are extremely proud of how far Meltwater has come in delivering ethical AI to customers. We believe Meltwater is poised to keep delivering breakthrough innovations that streamline the intelligence journey, and we are excited to continue taking a leadership role in responsibly championing our principles in AI development, fostering the transparency that builds greater trust among our customers.