Do Algorithms Make the Law? 4 Key Takeaways

Behind the scenes, algorithms are taking on an increasingly prominent role in our lives. Calls for regulation are growing – but what exactly are we talking about? And what are the implications for our legal systems?

Insights with Aurélie Jean, entrepreneur and data scientist, and Jean-Marc Meilleur, Partner at Gosselin & de Walque and former Prosecutor.

Their conversation at BeCentral on November 9th was visualized by the cartoonbase team.

[Live illustration from the Picture This debate between Aurélie Jean and Jean-Marc Meilleur]

To make the experience interactive, memorable, and accessible, we’ve developed a brand-new conference format called “Picture This”: our illustrators create live drawings based on our guests’ input, complementing the visuals prepared in advance (check them out below!).

Here are the 4 key points from the discussion:

1. We need to harness algorithms, not regulate them.

Should Facebook publish its algorithms? No, says Aurélie Jean, for several reasons.

Mandating the publication of algorithms for all players in the European market, for example, could potentially hinder the deployment of many tools without necessarily providing the desired transparency.

When an algorithm is trained on a dataset, publishing its code is not enough to reveal its full logic: much of that logic is built implicitly through the learning process and lives in the trained parameters, not in the source code.
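This point can be sketched in a few lines. The example below is a hypothetical toy, not any real platform's code: the same published code makes opposite decisions depending on the data it was trained on, so reading the code alone cannot explain its behavior.

```python
# Toy illustration (hypothetical): identical code, different trained parameters.

def train_threshold(samples):
    """'Learn' a single cutoff: the mean of the training values."""
    return sum(samples) / len(samples)

def classify(value, threshold):
    """Decide based on the learned cutoff, not on anything visible in the code."""
    return "accept" if value >= threshold else "reject"

# The same code, trained on two different datasets:
t1 = train_threshold([1, 2, 3])      # learned threshold = 2.0
t2 = train_threshold([10, 20, 30])   # learned threshold = 20.0

print(classify(5, t1))  # accept
print(classify(5, t2))  # reject
```

Publishing `classify` and `train_threshold` tells you nothing about which answer a given input receives; that depends entirely on the learned threshold, which is a product of the training data.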

Such legislation could also favor larger players, making it easier for them to capitalize on a startup’s innovation.


Aurélie Jean advocates for a focus on explainability rather than transparency – making the algorithm’s operation understandable for everyone.

As for regulatory efforts, Jean-Marc Meilleur agrees with Aurélie Jean, emphasizing that they should concentrate on measures that promote this explainability.

Regulating development practices, particularly in terms of testing, creates a framework that can detect algorithmic biases before their large-scale deployment.

This is because it’s not the algorithm that is inherently biased but rather the nature of the data on which it’s trained.

Therefore, taming algorithms means enforcing more explainability by regulating practices rather than the algorithms themselves.
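One concrete form such regulated testing practices could take is a pre-deployment bias check. The sketch below is an illustrative assumption, not a mandated procedure: it measures a simple fairness metric (the gap in acceptance rates between two groups, sometimes called demographic parity difference) and flags the model before large-scale deployment if the gap exceeds a tolerance.

```python
# Hedged sketch of a pre-deployment bias test; groups, decisions, and the
# 0.2 tolerance are all illustrative assumptions, not regulatory values.

def acceptance_rate(decisions):
    """Fraction of positive decisions (1 = accepted) in a group."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_a, decisions_b):
    """Absolute difference in acceptance rates between two groups."""
    return abs(acceptance_rate(decisions_a) - acceptance_rate(decisions_b))

# Toy model outputs for two demographic groups in a test set:
group_a = [1, 1, 1, 0]   # 75% accepted
group_b = [1, 0, 0, 0]   # 25% accepted

gap = parity_gap(group_a, group_b)
if gap > 0.2:  # illustrative tolerance
    print(f"Bias detected before deployment: parity gap = {gap:.2f}")
```

The bias here comes entirely from the decisions the model produces on the test data, which is exactly the point made above: auditing outcomes on representative data catches what inspecting the algorithm itself would miss.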


2. Let’s combat opacity to demystify the issue of responsibility.

When a screening exam fails to detect a tumor, who is at fault? The doctor, the technician, or the company that developed the scanner?

The example presented by Aurélie Jean illustrates that the complex issue of responsibility, which is often at the center of debates about algorithms, is neither new nor specifically related to the “black box” at the core of digital tools.


However, it is closely tied to the issue of opacity: for a judge to rule, for example, that developer negligence caused harm, they must be able to examine and understand how that developer’s algorithm works.

Once again, explainability plays a crucial role here.

Reducing opacity by improving explainability will help in determining responsibilities more effectively.

As our guests remind us, users also have a share of responsibility. While regulation can contribute to making social networks safer for minors, education can play an equally important role.


3. An algorithm can never dispense justice on its own.

We’ve seen the impact that the law can have on algorithms, but what about the impact of algorithms on the law? Are we heading towards a society where algorithms dispense justice?

In one word: no. While algorithms can help improve the framework within which justice operates – for example, by preventing the creation of “filter bubbles” on social media – automating the judicial system would be a dangerous overreach.

As Jean-Marc Meilleur reminds us, justice is a human endeavor: a judge must always be able to justify their decision, which cannot be dictated by an algorithm.

In fact, profiling judges based on their decisions is prohibited in France to prevent potential abuses.


Aurélie Jean highlights a major issue for case law: since an algorithm would be trained on previously rendered judgments, algorithmic justice would make it impossible to develop new legal precedents, which are essential for the law to evolve.


4. In the next 30 years, algorithms could save lives.

What aspects of our lives will be transformed by algorithms by 2050? And should we accelerate or, on the contrary, prevent these transformations?

Algorithms hold great promise in the medical field, as explained by Aurélie Jean. Predictive medicine has the potential to save lives, such as by detecting cancers before the appearance of the first symptoms.


However, she warns against the interference of these technological tools in our romantic relationships and, more broadly, in human interactions.

The challenge, as Jean-Marc Meilleur summarizes, is to create a regulatory framework that promotes the development of these benefits while preventing abuses.

He points out that the legal framework in Belgium is lagging behind the rapid evolution of technologies, depriving the justice system of the necessary tools to handle this substantial task.

Our guests agree that multilateral agreements will be crucial in this context.


Sign up for our newsletter to receive an invitation to our upcoming “Picture This” events!

[Illustration of the discussion on algorithms, drawn live during the Picture This event with Aurélie Jean]