Tuesday, July 7, 2020
Ça change tout: digital transition and energy transition
"Faced with digital innovation, people remain the masters of the game", an interview with Serge Abiteboul
Ça change tout is a podcast on the challenges of the energy transition, exploring the social, technological, economic and geopolitical upheavals of the climate-challenge era.
In this new episode, journalist Yolaine de la Bigne meets Serge Abiteboul, research director at Inria and a member of the Académie des sciences and of Arcep. A specialist in data and computer science, he studies the management of information and personal data on the web, subjects that have become essential as algorithms take an ever greater place in our daily lives.
Together, they shed light on these technological tools that seem indispensable today yet that we do not really understand. Serge Abiteboul explains the impact of digital technology on the energy transition and shows how the Internet can be a key to harnessing collective intelligence and improving the circulation of information.
A podcast produced by EDF.
--
Editorial direction: EDF
Production: HRCLS
Host: Yolaine de la Bigne
Editorial advice and coordination: Havas Paris
Monday, July 6, 2020
CNNum round table on the interoperability of social networks, July 6 at 2 pm
- Mr Serge Abiteboul, member of ARCEP and research director at Inria
- Mr Lucas Verney, engineer at the Direction Générale des Entreprises (DGE)
- Mr Dominique Hazael-Massieux, member of the W3C
- Ms Cécilia Alvarez, Director, EMEA Privacy Policy, Facebook
- Mr Jean Gonié, Director, Europe Public Policy, Snapchat
On the Conseil national du numérique side, the participants:
- Ms Salwa Toko, President
- Ms Annie Blandin, member co-leading the study
- Mr Henri Isaac, member co-leading the study
- Mr Charles-Pierre Astolfi, Secretary General
- Mr Vincent Toubiana, Deputy Secretary General
- Ms Myriam El Andaloussi, rapporteur.
- make life easier for users who have to move from one network to another and copy their data across;
- give users freedom of choice: no obligation to go to the big platforms to find their friends;
- restore the competition destroyed by systemic social networks and the network effect.
Wednesday, July 1, 2020
Our Social Networks, Our Regulation
This is the English translation, by Valeriya Tsekhanska, of the article Nos réseaux sociaux, notre régulation[1] by Serge Abiteboul and Jean Cattan, published in Grand Continent.
The networks we love to hate
We love to
communicate. We love to share. We love to debate, exchange, sometimes
vigorously. The Internet has multiplied our natural abilities to do
so in countless forms. To respond to this
desire to communicate, a myriad of services has developed over the
years. Among them are what are called digital social networks – services that
allow us to expose aspects of our personality, a
profile of ourselves, and to be constantly in contact with everyone
else.[2] Once
upon a time, it was Myspace[3]. Nowadays, a wide variety of services have emerged around this dual dimension of mediating between individuals and letting them communicate, ranging from Facebook, Snap or Twitter to WT Social and Mastodon. Although it is not their primary role, other services such as Wikipedia, Google Maps, YouTube or Jeuxvideo.com also allow such exchanges.
Little by little, we
have acquired various profiles and identities. Every day, we build more relationships, friendly or not, out of shared interest if not out of love. We stay in touch, we post, we follow each other, we like, we comment, we react, until we cannot take it any more and want to stop altogether. But, more often than not, we dive back into this exuberance of social ties.
These services attract more and more people through the network effect. Something interesting happens in our social sphere or in a much wider circle; our “contacts” join in; the information spreads virally; we feel the need to be there too. We want to be there. We enrich our traditional social ties with these services, and we forge new ties as well. This is where we often learn what makes our community, our neighbourhood, our country react. The information provided by the users themselves opens up new horizons, from the most local to the most general, horizons broader than those of traditional media.
Social networks are becoming an essential exchange facility, a new frontier of our public space.[4] And like every architecture, social networks are becoming normative. From Baron Haussmann to Lawrence Lessig, whether real or virtual, whether boulevards or computer code, architecture is law: code is law.[5] Social networks take part in defining the content that we exchange. This becomes obvious when we have to limit what we write to 280 characters, or when our messages disappear almost instantly. We do not express ourselves in the same way. Art forms are born of these constraints, but not only art.
Gide said, “Art is
born of constraints, lives off struggles and dies of freedom”.[6] Social
networks, as we know them, give rise to conflicts between different
freedoms. They draw their essence from the freedom of expression and the
right to be heard for many people who have had no say before: the activist of a
suburban association, the adolescent of a lost
territory... They can die of conflicts with other
fundamental rights: how to reconcile freedom of expression with the right to be
protected against misinformation as well as the right not to be defamed,
insulted or harassed on these same networks?
Beyond the parameters that directly define the content and form of our messages, most social networks also determine what is massively read and seen, and what is doomed to fall into the oblivion of cyberspace. They choose the content they show you in order to maximise their profits. As someone might say, “It’s the business model, stupid!”[7]. Social networks are not all dedicated to the good of humanity; it is not their (main) objective. Inevitably, companies seek profitability, and social networks are no exception. This is not a judgment, only an observation.
The social media business model is essentially based on advertising. Whether justified or not, such a model encourages heavy exploitation of our personal data and ever finer profiling[8]. The more the social networks know about us, the more profitable their advertising space becomes.
However, to be profitable, advertisements must not only be targeted; they must also be seen. To keep us online longer, social networks promote content that drives user engagement: content that makes us react, attracts us, and makes us stay. As a result, we are more exposed to advertising, to sponsored content and to the content that sponsors the platform. Our attention has become a vast market. As Reed Hastings, co-founder and CEO of Netflix, once said: “we actually compete with sleep (…) and we're winning!”[9], a contemporary version of the “available brain time” that once prevailed on television. What is true for audio-visual services is also true for the networks: we spend more and more time in front of our screens, and this engagement quickly turns into addiction[10].
Like any “addict”, we always need more. What makes us react can be quite extreme: from a tiny kitten to something truly outrageous. Often even this is not enough, and we end up inventing stories and gossip, and dragging out conspiracies. We would get bored otherwise.
Fortunately, commitment
to noble causes, true feelings, humour, poetry and art on social
media saves us from all this. Some beautiful ideas and
intellectual innovations circulate there as well. They remind
us how infinite human creativity is. We love to see a crowd mobilise for just causes. We admire the young people who take up struggles through new means of communication that their elders failed to grasp. We take advantage of this every day, and we rejoice in it. In essence, every day we prove that social networks can be an instrument at the service of our development, if not of the liberation of individuals and peoples, because, after all, isn’t it all about connecting people and creating society?
However, there is a risk: if we sit back and allow social networks to evolve on their own, and if their dark side prevails, we will one day have to pay the price for not having been able to master the unbridled expression of our own desires and impulses, abandoning the public sphere to a business model that commodifies people.
The toxic content that plagues social networks comes in many forms: terrorism and child pornography, threats and fake news, misinformation, cyberbullying, violations of privacy. Society is concerned and demands measures to fight it. The Cambridge Analytica case was the first to reveal such a phenomenon: how the use of social networks, and in particular of political advertising on them[11], can influence a major collective decision. Our exposure to hatred continues to grow, along with the fear that it may turn into violence, a fear that is unfortunately very tangible.
If we do not want to give up on social networks, we must show that there is another way. This requires education and the engagement of citizen Internet users. Some organisations offer alternative models. This is the case, for example, of WT Social, a social network focused on news, launched by Jimmy Wales in 2019. The idea is to draw inspiration from Wikipedia's methods of funding, presentation and moderation to avoid the spread of fake news.
While such initiatives are welcome, will they manage to preserve the energy of social networks that makes us so hooked? Perhaps, but it is very unlikely that they alone will succeed in replacing the major platforms in the foreseeable future, as some of those platforms already have billions of users and know perfectly well how to profit from the network effect and from nearly unlimited financial resources. It is safe to say that we will have to rely on some form of state intervention to force these network giants, which we have nurtured through our participation, to work for the good. Even if the market were to provide solutions to this end, the spread of fake news and the dissemination of hatred are not problems that can be left to the wisdom of the crowd and the law of supply and demand.
In our system of representative democracy, institutions with democratic legitimacy have a say. However, the task of guaranteeing freedom of expression for everyone while protecting people from the excesses of social networks is exceptionally difficult. There is a narrow margin, for example, between thwarting disinformation and instituting state information. Institutions must intervene in an open framework, under the watchful eye and control of a responsible society. In order to achieve the necessary balance in every area, regional, national, cultural and linguistic, it is essential to ensure the participation of all the communities concerned: researchers, civil society, users and representatives of the State. Everyone must be able to participate in identifying the problems encountered on the platforms, in defining solutions and in monitoring the actions taken.
It would be wrong to say that nothing is being done about the much-publicised propagation of fake news and hatred and the commodification of personal data. Society and politicians have taken up these subjects. A sense of urgency has arisen, as if these subjects were new. Suggestions are made and decisions are taken on both sides: by the social networks themselves and by public authorities. States react and parliaments legislate, sometimes under the pressure of current events and emotions. This is done in scattered order, with no overall view, trying to find solutions, often separately, to different facets of the same problem. As a result, the responses remain inadequate.
In this ambient chaos, some main lines can still be identified. Describing the situation at the 2018 edition of the Internet Governance Forum[12], the UN summit held that year in Paris, Emmanuel Macron suggested three possible paths, each corresponding to a model for regulating social networks:
- The first way is that of regulation by the market players themselves: self-regulation with incentive-based intervention by state authorities. While this is the path that allowed the creativity of the web to flourish, it has shown its limits when it comes to regulating social networks.
- The second way is that of direct control of content by the State and the imposition of an obligation to withdraw harmful content. This is the path chosen by various legislative initiatives, including in France. In addition to providing a model for undemocratic regimes, it has so far been ineffective.
- The third way is the one advocated for Europe by Emmanuel Macron at the Internet Governance Forum, a path yet to be developed: “We must build a new path through regulation, whereby States, Internet users, civil societies, and all stakeholders are able to effectively regulate”. Concretely, this is the path of regulatory supervision, which we want to be open and democratic.
The third path is more
complex, but potentially more effective and more respectful of
everyone's rights. We believe that this will be the way to
combine many initiatives emerging from society as a whole, to generate new
ones. We also believe that it can be designed in Europe,
and serve as a model on a global scale.
The 1st way: self-regulation and incentive-based intervention
What about published
content that can have a harmful effect? Both in the United
States and in Europe, it is subject to the social network’s limited liability
regime. It is this regime that has enabled the success of services whose
content is fed by users. For its opponents, this is a flawed regime to which we
owe the appearance of the worst excesses encountered on the web.
In 1996, such a regime was instituted in the United States by Section 230 of the Communications Decency Act: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”[13]. This measure was intended to preserve the expansion of the Internet and market freedom, and to enhance users' control over their own online activities. This included placing on parents the responsibility of protecting their children against harmful content. The limited liability of service providers went hand in hand with the accountability of adults.
Considering that the
most powerful of the social networks that we use are based in the United
States, it is important to have in mind this limited liability regime that
informs the legal culture with which they are imbued: the principle is that
social networks are not responsible for content propagated on their platforms
if they do not have editorial control over it.
In 2000, the European Union adopted the eCommerce Directive 2000/31, whose Article 14 echoes the American regime and limits its scope only very relatively. Its transposition in France, Article 6 of the Law for Confidence in the Digital Economy of 21 June 2004 as interpreted by the Conseil constitutionnel, lays the first bricks of the law on the responsibility of “hosts”[14]. The host, here a social network, is liable if it had actual knowledge of the illegal nature of the content and did not act promptly to remove such data or to block access to it once it became aware of it. That is to say that, concretely and very summarily, a social network, wherever it is established in the world, is obliged to remove manifestly illegal content of which it has become aware, either because such content has been reported to it or because it was characterised as such by a judge.
Note that these American and European texts were developed before social networks became what we know today; Facebook came into existence in 2004, YouTube in 2005, Twitter in 2006. Tensions arise from differences in how the principle of freedom of expression is understood on the two sides of the Atlantic: a quasi-religious reading in the United States of the First Amendment to the American Constitution, and a more flexible interpretation in Europe of Article 10 of the European Convention on Human Rights, which admits limitations that are "necessary in a democratic society"[15].
Furthermore, in
Europe, and especially in France, the intervention of the judge remains
important. The Law For Confidence in the Digital Economy has opened the way to
a whole series of summary proceedings, by which a judge can urgently order
platforms to remove content. While deadlines are often considered too long by
complainants, the procedures and case law exist.
In this context, a first question arises: what is illegal content? Between clearly illegal and clearly legal content lies a range of “grey” content that must be assessed by the platform. Qualifying this content is not simple for a social network's moderators, and can even divide magistrates[16].
To “frame” practices, a social network develops a code that defines which content is acceptable under its “community standards”. This raises the question of its legitimacy and authority, as a private actor, to define what can or cannot be accepted on services that are admittedly private, but that also function as public spaces. The social network then processes the content and decides whether it complies with its community standards. Different means are implemented to do this.
Human resources -
which are extremely variable depending on platforms - are allocated to moderate
the content reported as potentially harmful by users of the platform, or
automatically detected as such.
Among other questions, mass human moderation on social networks raises the problem of the wellbeing of the moderators. More and more studies, reports and articles highlight their suffering: they have to view violent or pornographic content all day long, spending hours in the trash of the web. Poorly qualified people, often in distant countries, have to assess sometimes very complex legal questions that may also require in-depth knowledge of the culture or current affairs of the country where the relevant events are taking place.
In fact, users are fairly poor at reporting content that is illegal or contrary to the network's standards. The majority of reports are inaccurate and are made under the influence of emotion. Despite their own lack of knowledge and skills, moderators do better, because they are trained and less directly involved. Nevertheless, they too have their cultural and personal biases. Their task is not easy, especially for hate messages, and perhaps even harder for fake news.
An alternative is to delegate to algorithms this work of moderation that is so difficult for humans to sustain. How can software detect harmful content? Typically, machine learning techniques are used. The software is first trained on a sample of content annotated by humans, who have qualified each item as acceptable, hate speech, violence, etc. Confronted with new content, the software then tries to determine which annotated messages this content most resembles, and assigns the corresponding qualification.
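As a purely illustrative sketch of this train-then-classify loop (not the pipeline of any actual platform; the example texts, labels and model choice are invented and the scikit-learn library is assumed):

```python
# A minimal sketch of supervised moderation: a classifier is trained on
# content annotated by humans, then asked to label new messages by analogy
# with the examples it has seen. Texts, labels and model are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-annotated training sample (invented for the example).
texts = [
    "Have a great day everyone",          # acceptable
    "You people should all disappear",    # hate
    "I will hurt you if I see you",       # violence
    "Lovely photo, thanks for sharing",   # acceptable
]
labels = ["acceptable", "hate", "violence", "acceptable"]

# Bag-of-words features plus a linear classifier: far simpler than what
# large platforms deploy, but the training/prediction loop is the same.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Confronted with new content, the model returns the label of the annotated
# examples that the new text most resembles, with a confidence score.
new_post = "You should all disappear"
print(model.predict([new_post])[0])
print(model.predict_proba([new_post]).max())
```

With a few examples per class this toy model is obviously unreliable; the point is only to show where the annotated sample sits in the process and why its quality determines everything downstream.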
Algorithms are already widely used in the detection of illegal content, in particular with regard to terrorism and child pornography. They are said to already perform better than human moderators, even if they are still far from perfect: the problem is complex, with the ambiguity of language, irony... and, above all, the lack of context. Because of this, and despite the progress made in this direction, the final decision to withdraw content or not is taken by a human (with some limited exceptions, such as certain terrorist content or child pornography). One day the question will arise of whether we can accept delegating such decisions to software. That question deserves to be debated.
Social networks therefore mobilise both human and technological resources to moderate content. Beyond the practical difficulties encountered, the choice of fully internalising the problems posed by social networks is questionable as a matter of principle:
i. Is the relative absence of recourse to a judge in their procedures acceptable?
ii. Is it desirable that they alone decide their content-withdrawal policy?
iii. As efforts vary considerably from one social network to another, at what point can a social network be considered sufficiently diligent in identifying and removing content? What should be measured? Against which standard should it be judged?
How can these questions be answered while staying within the fundamentals of EU law?
Pending a real substantive debate on these questions, the platforms remain in a flexible legal environment and enjoy a very wide margin of self-regulation. This is what allows them to arrogate to themselves the vocabulary of the State and set up, for example, “supreme courts” which, like the one envisaged by Facebook[17], would review the choices made by the platform as to whether or not to remove content. The public authorities are so distant from the platforms, and the space left vacant by the State is so large, that the platforms can afford such sovereign behaviour.
Whether you are a defender of freedom of expression or a representative of public order, this does not work. The self-regulatory regime leaves far too much room for the platforms. Depending on their goodwill and their means, they will be either excessively or insufficiently diligent[18]. Moreover, what legitimacy do these private platforms have to solve this variety of problems on their own, with only a posteriori intervention by authorities acting on behalf of the people? What legitimacy do they have to decide what can and cannot be said in a public space, and to define that public space?
Ultimately, it is a question of defining a point of balance and of resolving questions of major societal importance: questions of memory, of our relationship to the body, to discrimination, of the relationship between citizens and their representatives, etc. It cannot be up to the platforms alone to define this point of balance. It is up to them to participate in its implementation, yes, but it is up to the community to define it. This is why, at some point, intervention by the State and by society is required. The question is how, and to what end?
The 2nd way: direct
control of content by the State
In addition to the
system established at the European level, different levels of response have
been put forward in the Member States to address the problems posed by the publication
and dissemination of content, whether it is manifestly illegal or located in a
grey area.
The first case concerns the fight against disinformation. Briefly, the French law of 22 December 2018 on the manipulation of information opened the possibility, during electoral periods, of applying to the judge in summary proceedings to decide, within forty-eight hours, on the withdrawal of content that constitutes “inaccurate or misleading allegations or accusations of a fact likely to alter the sincerity of the upcoming election, disseminated in a deliberate, artificial or automated and massive manner”. In addition, the platforms must report on the measures adopted to stem the spread of fake news and provide transparent information on the remuneration “received in return for the promotion of [...] information content [relating to a debate of general interest]”.
Another case concerns the treatment of hate speech. The initiative comes from Germany. Since 2018, the so-called NetzDG law notably requires social networks to remove manifestly hateful content within twenty-four hours, effectively leaving it to the platform to assess what that is. The law provides for a penalty of up to 50 million euros if the platform fails to comply with this obligation. According to the latest information, the German government intends to further reinforce this system.
It is difficult to say at present what the regime for regulating hate content in France will be[19]. The anti-hate bill proposed by Laetitia Avia in March 2019 gave rise to strong reactions. As it stands, in broad strokes, it imposes an obligation to remove illegal content of various kinds within 24 hours. In the text passed by the Assembly at second reading, a time limit of one hour was also established for content notified to a platform by the authorities as terrorist in nature. Sanctions could reach 4% of the company's turnover. A blocking measure targeting mirror content is also planned.
Among the most resounding criticisms are those of the European Commission, recorded in a notice of 22 November 2019. According to the Commission, the measures disproportionately restrict the freedom to provide services; the bearing of the aim pursued, human dignity, on the identification of the targeted content remains unclear; the proposed measures are not targeted and cover an excessively broad range of online platforms; the notification conditions are not precise enough; there is a risk of excessive deletion of content given the time limit and the amount of the penalty; there is no safeguard against general monitoring of content by platforms; and, finally, France should not legislate independently when the Commission could legislate on this issue within the framework of the Digital Services Act expected in 2020. It should be noted, however, that despite the Commission's announcement that it wanted to act, no specific plan has been published so far.
More generally, even though we fully understand the need to protect the public, we may ask whether legislative initiatives like those pursued in France and Germany do not approach the problem from the wrong end.
Paradoxically, by increasing the amount of sanctions and strengthening the mechanisms of repression to which social networks can be subject, governments risk accentuating the power of these platforms. If no control is exercised over the methods of propagation and withdrawal of content beyond the question of whether content remains one hour or twenty-four hours on the platform, the State will remain blind to the choices made by the platform, both in the way it flags certain content and in its removal policy. All the more so as small platforms, which do not have the means to implement a content-removal apparatus that complies with the law, are condemned to depend entirely on the large platforms for moderation, to considerably limit what their users can post, or to be sanctioned.
To underline the difficulties encountered, mention should be made of the case where content notified by a judge and withdrawn reappears elsewhere in the same or a slightly altered form. This problem was addressed in a judgment of 3 October 2019 by the Court of Justice of the European Union (Case C-18/18) in a defamation case involving Facebook. It follows from this judgment that a court is entitled to order a social network like Facebook to remove not only information identical to that already withdrawn, but also information deemed “equivalent”. Content is deemed equivalent by the Court if it remains “in substance unchanged from that which gave rise to the finding of illegality, and comprising the elements specified in the injunction, and that the differences in the wording of this equivalent content, compared to that characterizing the information previously declared unlawful, are not such as to compel the host to carry out an independent assessment of this content”.
The situation in which content deemed illegal reappears almost instantly elsewhere after a potentially lengthy procedure was clearly problematic. However, as this withdrawal order can be made “globally, within the framework of relevant international law”, the room for manoeuvre and the impact of the decision on social networks are potentially considerable. Does all content need to be compared with the removed content to determine whether it is equivalent? Who can judge this equivalence?
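In practice, such a criterion pushes hosts toward automated similarity checks. Purely as an illustration (this is not how any platform or court actually proceeds; the texts and the threshold below are invented), one naive way to flag candidate "equivalent" content is to compare word-level shingles of a new post with those of the removed content:

```python
# Illustrative near-duplicate check: compare word n-gram "shingles" of a
# candidate post with those of content already ordered removed, using
# Jaccard similarity. Texts and threshold are invented for the example.
def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

removed = "X is a corrupt traitor who betrayed the voters"
candidate = "X is a corrupt traitor who betrayed all the voters"

similarity = jaccard(shingles(removed), shingles(candidate))
print(f"similarity = {similarity:.2f}")
# A high score only flags a candidate; a human (or a judge) would still
# have to decide whether it is truly "in substance unchanged".
```

The sketch makes the difficulty visible: any threshold is arbitrary, and slightly reworded attacks can fall on either side of it, which is exactly why the question "who judges equivalence?" matters.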
Also, by adopting specific rules, the various European States introduce legal differences to the detriment of the consistency of the internal market. To harmonise matters, and even before the NetzDG law, the European Union adopted a code of conduct in 2016 and a series of recommendations for platforms in 2018[20]. These recommendations define certain methods for notifying illegal content, the extent to which hosts can take proactive measures or use automated detection, the remedies open to content publishers, how to deal with the specific case of terrorist content, etc. But all this remains very general and non-binding: Member States need only take due account of it.
By entrusting an administrative authority with the task of assessing the diligence of a platform in removing content, the State ends up excluding all the other institutions, starting with the justice system and society, from defining what can be said in a public space. Ultimately, the administration finds itself directly controlling content by sanctioning the failure to withdraw it, and our democracies thus promote (as if that were needed) a model that can only please far less democratic countries.
The phenomena we face
are frightfully complex. They play on our psychology, our individual and
collective dynamics in digital social networks, but also in real life. We must
open up the regulation of social networks to society as much as possible. The
supervision of social networks can open us up to this alternative, not so much
in that we would give specific missions to an administrative authority, but in
that it will allow society to seize the problem and provide real and human
solutions.
The 3rd way: agile supervisory regulation
During his speech at
the Internet Governance Forum in November 2018 where he announced the need to
follow the third path, Emmanuel Macron also announced a partnership between the
French State and Facebook, whose vocation was precisely to fuel reflection on
this third way.
For six months, ten
members of the French administration and three rapporteurs had access to
Facebook premises and staff to observe, exchange and question the modalities of
moderation of hate content by the social network. As part of this mission,
judges, experts in computer science, telecoms and regulation, representatives
of the police in charge of cybercrime and people responsible for the fight
against racism and discrimination rubbed shoulders. The question put to them
was the regulation of hate messages on Facebook in France. The goal was to come
up with solutions that can extend to (i) other platforms, (ii) other types of
content such as fake news, and (iii) the European framework.
During this mission, Facebook thus allowed French civil servants to study how its moderation works and, in particular, to visit some of its moderation centres in Europe. Given the time constraints, during the three months of in situ observation the members of the group did not have access to the social network's computer code. On the other hand, they were given an overview of the main algorithms underlying this code. While they were able to speak with the platform's engineers and moderators, this was done in the presence of company officials. And while the discussions were direct and controversial subjects were not avoided, the members of the group were exposed to the reality that Facebook wanted to present to them, which is of course only part of the reality.
From this mission, a
report was produced outlining several elements of findings and proposals[21].
Firstly, the report indicates that Facebook is determined to tackle seriously the problems encountered by social networks, and that it invests significant human and technical capital in this direction. This is, of course, to be contrasted with the smaller efforts of other platforms that have neither the means nor, perhaps, the will to confront these real problems. The report also notes that, even with goodwill and resources, the social network is struggling to solve the problems its services pose to society.
A crucial finding is
that social networks play a role of “de facto editorialisation”. By structuring
the content published by their users, by choosing which to “promote”, they go
beyond the role of mere information host. The algorithms defined by social
network designers accelerate the spread of certain content, feature it, and
make it visible to a massive number of people[22]. The exposure of the public to problematic
content is rooted in these choices. The non-exposure (or a lesser
exposure) must therefore also be the result of the social
network’s editorial choices. This observation is the starting point of any
approach to regulating social networks that aims to be effective.
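To make this "de facto editorialisation" concrete, here is a deliberately naive sketch (in no way Facebook's actual algorithm; the signals and weights are invented) of an engagement-driven feed ranker. The weights encode editorial choices about what gets amplified:

```python
# Purely illustrative: a toy feed-ranking function that scores posts by
# predicted engagement. Real platforms use far richer signals, but the
# report's point stands: the ordering itself is an editorial choice.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_clicks: float   # hypothetical model outputs
    predicted_shares: float
    predicted_reports: float  # e.g. likelihood of being flagged

def score(post: Post) -> float:
    # The weights are arbitrary; changing them changes what the public
    # sees, which is exactly what "editorialisation" means here.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            - 5.0 * post.predicted_reports)

feed = [
    Post("calm-news", 0.20, 0.05, 0.00),
    Post("outrage-bait", 0.60, 0.30, 0.10),
]
for p in sorted(feed, key=score, reverse=True):
    print(p.id, round(score(p), 2))
```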
As a result, the authors of the report suggest that the social network should “internalise” the objectives assigned to it by the public authorities in the fight against, for example, hateful content. The angle is different from that adopted by the legislator until now. It would consist not in deciding what is to be deleted, but in ensuring that social networks put in place the measures necessary to prevent the propagation of hate content, according to criteria that are not set solely by the social network itself.
These objectives and criteria vary with the cultures, languages and societal themes playing out in each State, and must take due account of the diversity of European States and peoples. The implementation of regulation can therefore only be done effectively at the national level. To this end, the report advocates replacing the logic of the State of “installation” of a platform (often Ireland) with that of the “destination” State (the State where the victims reside). That said, under this model the national regulators must coordinate through a European regulator, which would have sufficient power to talk to the major social networks from a position of strength, would be able to define the rules of the game, and could curtail the excesses of national regulators acting under the pressure of national events.
The regulation should
then be built around the empowerment of social networks, which are themselves
held to a “duty of diligence” vis-à-vis their users: in this way “social
networks would undertake to assume a responsibility vis-à-vis their users
regarding abuses by other members, and attempts to manipulate the platform by
third parties.”
An important aspect
of the report is that not all platforms are subject to the same regime. Only
the most important - the “systemic platforms” - are subject to such ex ante
regulation, to avoid the regulation limiting the emergence of new innovative
companies. As proposed by the report, it could be conceived that platforms
whose number of users reaches 10-20% of the population of a Member State would
be considered as such.
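Read as a rule, the report's proposal amounts to a simple population-share test; a minimal sketch, assuming a 15% threshold inside the suggested 10-20% range and invented figures:

```python
# A minimal reading of the report's proposal: a platform is "systemic" in a
# Member State when its user base reaches some share of the population.
# The 15% figure is just one point inside the suggested 10-20% range.
def is_systemic(users_in_state: int, population: int, threshold: float = 0.15) -> bool:
    return users_in_state / population >= threshold

# Hypothetical figures, for illustration only.
print(is_systemic(users_in_state=30_000_000, population=67_000_000))  # True
print(is_systemic(users_in_state=2_000_000, population=67_000_000))   # False
```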
Medium-sized platforms are not affected a priori. If one of them appears to pose particular problems or to have particularly negative effects, it could be promoted to the big league and fall within the purview of the regulator. It must be emphasised that small and medium-sized platforms, though generally not subject to this regime, are not exempt from all moderation: they are obviously still bound to apply the law.
At the national level, the regulator must have the means to assess the results of the measures taken by the social networks it regulates. To this end, social networks are subject to very high transparency standards. This transparency is essential, because it alone allows a serious and meaningful evaluation of the moderation work. As such, platforms must, for example, provide information on their moderation methods and their moderation statistics, including false positives and false negatives, that is, content unjustly withdrawn or wrongly authorised. Transparency also concerns the content-ranking functions: what content is promoted, why, and with or without remuneration?
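The false positives and false negatives mentioned here translate directly into standard error rates that a regulator could track over time; a small illustration with hypothetical counts:

```python
# Hypothetical moderation statistics of the kind the report would require
# platforms to disclose; from these counts a regulator can derive standard
# error rates rather than relying on the platform's own narrative.
true_positives = 9_000    # hateful content correctly removed
false_positives = 1_200   # content unjustly withdrawn (over-removal)
false_negatives = 2_500   # hateful content wrongly left up (under-removal)

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
print(f"precision = {precision:.2%}, recall = {recall:.2%}")
```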
Platforms should also
facilitate the reporting of problematic content. Internet users concerned by this
content, whether they have published or reported it, are notified of the
results of the procedures, and may appeal the decision.
In short, an “agile” national regulator is interested in overall dynamics rather than in specific cases, which can be escalated to the judiciary for resolution. The focus shifts from an obligation of result to an obligation of means, even if, as a last resort, the obligation of result persists within the framework of the law. To ensure compliance with the public objectives set for social networks, the regulator may impose heavy fines on networks that do not fulfil the obligation to take the measures imposed on them.
The most serious challenge that any moderation faces is its acceptance by society: if it does not do enough, it is an accomplice of Satan; if it does too much, it is censorship. This is the pitfall on which, we believe, both self-regulation and direct state moderation have foundered. A supervisory regulation consisting of a tête-à-tête between the State and the social network would not be immune to rejection by society either. The report therefore suggests that social networks enter into an informed political dialogue with all stakeholders: the regulator, of course, but also the government and its services, the legislator, the justice system, and civil society (in particular associations and research labs). All of them participate in the definition of objectives, in evaluation, in monitoring appeals, and in the construction of learning databases.
The national regulator in charge of supervising social networks would have the task of ensuring openness to the outside world, of organising debates around the definition of objectives, and of involving the whole of society in the supervision process. It would play a central role in sharing the information that describes the services of social networks, in particular the information that explains their algorithmic choices. Finally, based on the objectives set by the political authorities, the regulator would be responsible for resolving both general and specific problems, and for the diligent settlement of disputes.
The whole structure remains
in a delicate balance. Only judges can decide on the legality of the content.
The regulator should oversee the operation of systemic platforms. The system
derives its effectiveness from the complementarity of their roles.
Conclusion: Supervision, an open door to society
Society is already aware of the problems posed by social networks. This has led the networks to take action in an attempt to calm things down, while staying within the framework of the first path described. These issues have also made their way onto political agendas. Measures taken along the second path, such as the NetzDG law, have helped systemic platforms realise that their business cannot continue without profound changes.
The appetite of Internet users for social networks shows that these networks deserve to be saved. In this article, we have insisted on supervisory regulation, the 3rd path, as a way to address the problem without harming the essential contributions of social networks to society: allowing everyone to express themselves, to stay informed, to share and to communicate with others. However, this is only one side of the problem.
Education.
We also need to learn to use social networks, to treat each other well and to
learn to respect others. This places education at the heart of the system.
Today, Article L. 312-15 of the French Education Code provides, for example,
that “moral and civic education aims in particular to encourage students to
become responsible and free citizens, to develop critical thinking and to adopt
a thoughtful behaviour, including in their use of the Internet and online
public communication services.” Critical thinking can be developed, for example, by encouraging students to edit and moderate online content, on Wikipedia for instance. Social networks are the fruit of scientific and
technical revolutions. Such teaching must therefore also take this dimension
into account. In particular, computer education is essential to understand how
these networks operate. If we are to be masters of our environment, we must
understand what it is made of. This education in critical and algorithmic
thinking in the age of social media cannot be limited to a young audience. It
must cover all age groups. All of us are concerned!
Data and algorithms. The algorithms[23] for detecting content that needs to be discarded are based on compilations of annotated data. Using this data, the algorithm “learns” to separate the wheat from the chaff: to distinguish true information from misinformation, a hate message from a merely caustic text, etc. These compilations must therefore be placed at the service of society as a whole, and not only be used by the large platforms that have the means to build them. Small businesses must also have access to them; otherwise there is a risk of strengthening oligopolies. This calls for such compilations to be considered a common good, “data of general interest”. Obviously, data sharing must be carried out with respect for the protection of privacy and business secrecy, possibly after anonymisation and/or aggregation. The world of research and civil society have their place in the constitution of these data sets, which must be produced in ongoing agreement between all the parties concerned. Beyond the question of data, researchers should be encouraged to explore new ways of algorithmically detecting harmful content. For example, work on cyberbullying has shown that these situations can sometimes be detected more effectively by observing the graphs characteristic of attacks (clusters of messages converging on a person) than by analysing the actual words[24].
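A toy sketch of that graph-based idea (assuming the networkx library; the accounts, messages and threshold are invented): rather than analysing the words, it looks for many distinct accounts converging on the same target within a short time window:

```python
# Toy illustration of graph-based detection of a harassment "pile-on":
# instead of analysing the words, count how many distinct accounts
# suddenly converge on the same target. Data and threshold are invented.
import networkx as nx

G = nx.DiGraph()
# Each edge sender -> target represents one message in a short time window.
messages = [
    ("a", "victim"), ("b", "victim"), ("c", "victim"), ("d", "victim"),
    ("e", "victim"), ("a", "friend"), ("friend", "a"),
]
G.add_edges_from(messages)

THRESHOLD = 4  # arbitrary: how many distinct senders count as a cluster
for node in G.nodes:
    senders = set(G.predecessors(node))
    if len(senders) >= THRESHOLD:
        print(f"possible pile-on targeting '{node}' from {len(senders)} accounts")
```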
Citizen engagement. Violent extremism of all kinds has invaded social networks. Some organisations have chosen to fight it on the same ground. This is the work carried out by Moonshot CVE, which builds a demographic and geographic analysis of audiences using data from social networks.[25] By identifying that, in a given region, a given public is more prone to utter hate speech online, social action becomes easier. In the United States, together with the Anti-Defamation League and the GenNext Foundation, Moonshot CVE has launched a programme called the Redirect Method[26] to combat propaganda by ISIS and white supremacists. People whose searches indicate one of these two tendencies are redirected, via Google Ads or YouTube videos, to organisations and content able to deconstruct the propaganda discourse. Because, as these associations note, once the content has been removed, the vulnerable person who looks for it is still left where they were. Putting them in contact with certain people or certain content can help them change.
In such approaches, engagement goes beyond the social network alone and involves third-party organisations. The steps and knowledge required are far too numerous to be internalised by a social network. These tasks must be the responsibility of specialised organisations, actors on the ground like Life After Hate, an organisation of former hate-mongers now serving the fight against hate[27].
The social network regulator. Supervision is based on transparency, and therefore on understanding how social networks work. This makes strong participation by society as a whole possible, and can induce the economic actor to open the door to “regulation by society” (Paula Forteza)[28]. Supervision becomes the responsibility not only of an authority, but of society as a whole, which can take action to deploy the most appropriate remedies. One role of the regulator is to mobilise and empower the whole of society. To this end, the regulator must be a tool for dialogue between the government and its departments, the judiciary, researchers from all disciplines, associations and Internet users.
To transform social
networks, the regulator must adapt to its purpose and rely on the forces of social
networks in order to ultimately become a social network itself.
[1] This paper does not
represent an official view of any institution or person other than its authors.
[2] On the sociology, history and typology of social networks, see in particular the works of Pierre Mercklé, notably La sociologie des réseaux sociaux, La Découverte, 2016, and La découverte des réseaux sociaux. A propos de John A. Barnes et d’une expérience de traduction collaborative ouverte en sciences sociales, in Réseaux, 2013/6, no. 182, pp. 187 ff.
[3] A timeline of social media dating back to the 1970s is available on Wikipedia: https://en.wikipedia.org/wiki/Timeline_of_social_media. See also V. Schafer, Les réseaux sociaux numériques d’avant, in Le temps des médias, 2018/2, no. 31, pp. 121 ff.
[4] On social networks, their architecture and their influence on democracy, see also Amaelle Guiton, Réseaux sociaux : ont-ils enterré le débat public ?, in Revue Projet, 2019/4, no. 371, pp. 26 ff.
[5] On the parallel between computer code and Parisian architecture under
Napoleon III, see Lawrence Lessig, Code. Version
2.0, Basic Books, 2006, p. 127.
[7] The expression is borrowed from the title of a workshop entitled It's
the business model, stupid! Targeted advertising and human rights organised
at Rightscon 2019 in Tunis on June 13, 2019.
[8] That a great deal of data is required to better target the public receiving an advertisement is not self-evident, and such collection must in any case comply with the principle of data minimisation established by Article 5 of the General Data Protection Regulation (2016/679).
[9] The quote from Netflix CEO Reed Hastings is taken from Rina Raphaël, Netflix CEO Reed Hastings: Sleep Is Our Competition, Fastcompany.com, June 11, 2017. “Available brain time” is an expression used by Patrick Le Lay, then CEO of TF1, back in 2004.
[10] The Statista.com site reports that the average time spent worldwide on social networks increased from 90 minutes in 2012 to 136 minutes in 2018.
[11] The information on the Cambridge Analytica case is taken from Alex Hern, Cambridge Analytica: how did it turn clicks into votes?, Theguardian.com, May 6, 2018. For examples of advertisements developed according to the personality types defined, see Jeremy B. Merrill and Olivia Goldhill, These are the political ads Cambridge Analytica designed for you, Qz.com, January 10, 2020.
[12] The speech by Emmanuel Macron at the Internet Governance Forum is
available in full via Elysee.fr.
[14] For the decision of the Conseil
constitutionnel on the LCEN, see Recital 9 of the decision
n°2004-496 of June 10, 2004.
[15] For an illustration of a judgment of the European Court of Human Rights
on freedom of expression, see ECHR, Handyside v. the United
Kingdom , judgment of December 7, 1976.
[16] On the case of moderators, in addition to the numerous articles and
dedicated reports, see the work of Sarah T. Roberts, Behind the screen,
Yale University Press, 2019 and articles by Casey Newton for The Verge,
including Bodies in seats of June 19, 2019 available
on Theverge.com.
[17] On Facebook's “cour suprême”, see among others Le Monde and AFP, « Bientôt une « cour suprême » de Facebook, pour statuer sur les publications supprimées », Lemonde.fr, January 25, 2020; and, for all information related to its operation, Brent Harris, Preparing the Way Forward for Facebook's Oversight Board, about.fb.com, January 28, 2020.
[18] On the power and difficulties of moderation with software, see, for
example, Ex Machina: Personal Attacks Seen at Scale, Ellery
Wulczyn, Nithum Thain, and Lucas Dixon, WWW '17: Proceedings of the 26th
International Conference on World Wide Web.
[19] On France's position, see the op-ed by seven French ministers: « Mettre fin à l'impunité » sur le Web : sept ministres s'engagent à lutter contre la haine en ligne, Lemonde.fr, June 18, 2019.
[20] The European Commission’s code of conduct of May 2016 on illegal online
hate speech is available on the ec.europa.eu website. See also the
recommendation of the Commission of March 1, 2018 on measures to effectively fight
against illegal online content, C (2018) 1177 final.
[21] The report of the mission “Regulation of social networks – Facebook
experiment” published in May 2019 is available via numerique.gouv.fr.
[22] On a similar approach, which considers that the main problem lies in the organisation of online content, see H. Murphy and M. Murgia, Can Facebook really rely on artificial intelligence to spot abuse?, Ft.com, November 8, 2019, and in particular the conclusion by Sasha Havlicek of the Institute for Strategic Dialogue: “If you don't address the underlying tech architecture that amplifies extremism through the algorithmic design, then there is no way to outcompete this”.
[23] S. Abiteboul, G. Dowek, The
Age of Algorithms, Cambridge University Press (translated from Le temps des algorithmes, Le Pommier),
2020.
[24] On the use of
digital data, see S. Abiteboul and V. Peugeot, Terra Data, Qu'allons-nous faire des données numériques?,
Le Pommier, 2017.
[25] The Moonshotcve.com site presents the organisation’s
work on combating online violence, including all of the work of mapping
extremist speech.
[26] Information relating to the Redirect Method is available in particular on redirectmethod.org and on the ADL website.
[27] Like Life After Hate, other organisations also rely on connecting with former violent extremists, in particular to define the most suitable responses to extremist speech online. See, for example, the Against Violent Extremism programme of the Institute for Strategic Dialogue.
[28] On regulation by society, see Claire Legros, Paula Forteza : « Les citoyens doivent participer à la régulation des plates-formes numériques », Lemonde.fr, November 19, 2018.