It’s old hat by now that the electoral process has proven especially vulnerable to certain interest groups’ desire to wedge their way into social platforms and manipulate the spread of potentially harmful information. Platforms themselves are divided on how they manage political content; Facebook has decided to take an entirely hands-off approach, while Twitter has announced a ban on all political advertising.
Both positions have their detractors – the unequivocal refusal to regulate, which tacitly condones the spread of contentious content, versus the decision to become the arbiter of political content in your feed. Following the Twitter ban, the Taoiseach voiced his reservations about the disabling of a significant channel for political representatives to reach voters. He also expressed concern that such a ban could act as a contagion, spreading to political advertising across all media, including billboards and newspapers.
While such a view has more than a whiff of scaremongering, it reflects the inherent tension in regulating political content online. Apart from a consensus that ‘something must be done’ there is very little agreement on where to draw the line, nor a sense of how easy it will be to police once drawn. Watch this space.
The General Data Protection Regulation (GDPR) restricts how personal data may be processed by a data controller. In particular, personal data may not be used for a purpose incompatible with the purpose for which the data was initially collected.
One exception to this is Section 41(b) of the Data Protection Act 2018. This allows a data controller operating in Ireland to disclose personal data to a third party to the extent that this is “necessary and proportionate for the purposes of preventing, detecting, investigating or prosecuting criminal offences.”
Typically, this arises following a request from An Garda Síochána or another law enforcement body for disclosure of information containing personal data. Such requests are common in sectors such as financial services and insurance, although it is up to each firm to handle each request in a compliant manner.
However, data controllers in other sectors might be alarmed to receive a request from the Gardaí seeking disclosure of information under Section 41(b). This might be for various information relating to named individuals or for a copy of CCTV footage. The request might be marked urgent, refer to serious criminal allegations, or be broad or exploratory in nature. You might feel under pressure to comply with the request.
The most important thing to know is that you are under no obligation to comply with a request if it is made under Section 41(b) of the Data Protection Act 2018. It is completely voluntary. However, the catch is that if you choose to comply in full or in part, you bear the risk as the data controller. This means being satisfied that disclosing the personal data is necessary and proportionate for the purpose of preventing, detecting, investigating or prosecuting criminal offences. This places a high burden on you as the data controller, including keeping appropriate records to justify your decision and to demonstrate accountability under GDPR. You also have other obligations, including transparency to data subjects, data minimisation, facilitating data subject rights, and ensuring appropriate data security.
If information concerning particular individuals or video footage is that important for a criminal investigation, the Gardaí can (and often will) get a District Court order or even a search warrant. And if this is served on you, there will be a legal obligation to provide the information, and you will have protection as a result. In many instances, this may be preferable to complying voluntarily with a request made under Section 41(b) and taking on the risk and potential liability of getting it wrong.
And if you choose not to comply with a disclosure request that is made under Section 41(b), which you are entitled to do in the absence of any other legal or statutory obligation, bear in mind that the communication received will likely contain sensitive or confidential information which should not be retained unless there is a specific reason to do so.
Earlier this month the Minister for Health, Simon Harris, launched a Vaccine Alliance – a network of healthcare professionals, policy makers, patient advocates, students and representatives from groups most affected by vaccine hesitancy – to boost the uptake rate of childhood vaccines. Recently four European countries, including the UK, lost their measles-free status and there are fears Ireland could follow suit. The decline in vaccine uptake has been linked to the spread of misinformation – or “fake news” – on social media platforms. Minister Harris threw down the gauntlet to social media companies to “decide which side they want to be on” and take decisive action to help reverse this trend.
The challenge of regulating tech companies in the public interest, particularly social media platforms, has been explored in depth. Proposed measures become entangled in overlapping areas of tech, policy, privacy, free speech and platform liability. Differentiating between illegal speech and ‘opinions I don’t agree with’ (like vaccine disinformation) presents serious challenges to freedom of expression and plurality; at the same time, making social media platforms the arbiters of truth is manifestly undesirable. Regulatory overreach would likely restrict the free access to services that modern society has come to rely upon, as well as stifle innovation.

On the other hand, the algorithms that drive the social media companies’ traffic favour provocative content that engages users and prolongs their time on the platform, providing a captive audience for targeted ads. In effect, social media has become weaponised to serve advertisers – disinformation is a profitable business.
Across the board the response has been a patchwork of work-arounds. In 2018, the European Commission published its report on a ‘Multi-dimensional Approach to Disinformation’, which opted for a co-regulatory Code of Practice and promotion of media literacy. Companies themselves have taken initiatives to manage misinformation – the broad consensus around the potential harm caused by vaccine misinformation has assisted this agenda. Facebook works with third-party fact-checkers to reduce the distribution of stories that have been flagged as misleading. Instagram has said it would hide hashtags that have a “high percentage” of inaccurate vaccine information, with mixed results. Twitter is launching a new tool that directs users to credible public health resources. In February this year, YouTube announced that it is demonetising anti-vaccination content. This month, Google adjusted its search algorithm to boost original journalism.
Clearly, a degree of self-regulation has already been adopted by these tech giants. But private entities that are change agents in the areas of privacy, competitiveness, freedom of speech, national security and law enforcement, operating without oversight, run the risk of the tail wagging the dog.
As part of its remit to transpose the Audiovisual Media Services Directive (EU) 2018/1808 (AVMSD) into Irish law by September 2020, the Broadcasting Authority of Ireland (BAI) will effectively become an EU-wide watchdog for video on-demand services that are based in Ireland. Under the Directive, providers will be required to implement age verification, parental controls and a ‘robust’ complaints mechanism. The BAI would become a statutory regulator with legally enshrined enforcement powers to police social media sites’ video content.
The UK government published its Online Harms White Paper on 26 June 2019, which proposed both government- and industry-led initiatives, including a regulatory framework, an independent regulator, user redress and a statutory duty of care imposed on social media companies. The duty of care focuses on a set of desirable outcomes, leaving it to the companies to decide how to implement them, not unlike the implementation regime for GDPR. Apart from the measures to be adopted as part of its duties with respect to the AVMSD, the Irish government has not proposed any parallel regime, despite the obvious and pressing need to do so in light of Ireland’s unique position as the EU country of incorporation for a large number of global social media companies. Until it does, government talk about the onus being on social media companies to decide which side they want to be on is cheap.
In Ireland, it’s increasingly common to see the term “privacy” used interchangeably with, or as a substitute for, “data protection”. This may be due to lack of awareness, the influence of U.S.-focused organisations, or marketing preferences for a catchier term. Whatever the reason, it is important to know the legal difference between the terms in order to avoid confusion about rights and obligations.
Privacy is a broad term encompassing a number of rights, including the right to be let alone and the right to respect for private and family life, home and communications. A useful description of privacy is from the UK’s Calcutt Committee report of 1990 as “the right of the individual to be protected against intrusion into his personal life or affairs, or those of his family, by direct physical means or by publication of information.” In Ireland, privacy rights derive mainly from the Constitution of Ireland (as one of the unenumerated personal rights recognised by Article 40.3), Article 8 of the European Convention on Human Rights and Article 7 of the EU Charter of Fundamental Rights.
Data Protection means the protection of individuals in relation to the collection and use of their personal data. In Ireland, data protection is governed by Article 8 of the EU Charter of Fundamental Rights, the General Data Protection Regulation (GDPR) and the Data Protection Act 2018. Organisations have legal obligations in relation to processing of personal data and individuals have rights, including information, access, rectification, objection and erasure. Other rights include lodging a complaint with the Data Protection Commission and receiving compensation for damage suffered as a result of a data controller or processor not complying with their GDPR obligations.
While the GDPR should not be considered a privacy law (and the word “privacy” does not appear in its articles or recitals), data protection and privacy are sometimes related in practice because the same factual situation can engage with both legal concepts. For example, the risk of harm to an individual resulting from how their personal data is processed could also involve a breach of privacy depending on the circumstances.
However, we recommend keeping “privacy” and “data protection” distinct to avoid confusion and misunderstandings about legal rights and obligations. It is important for individuals to know they have data protection rights, and for organisations to know they have data protection obligations, in situations which often have nothing to do with privacy.
The GDPR is principles-based, and these principles inform everything that flows from their application, including its technology-neutral scope, which is expressly mentioned in the recitals and makes it future-proof. Blockchain technology is built on blocks of digital information, replicated across a distributed network of nodes that may be operated by multiple data controllers. Each block builds on the last and, to maintain the integrity of the chain, cannot be modified or altered after the transaction is completed.
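The immutability described above can be illustrated with a minimal hash-chained ledger sketch in Python. This is an illustration only: real blockchains add consensus protocols, digital signatures and distribution across many nodes, none of which is modelled here.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents (excluding its own hash field) deterministically
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block commits to the hash of the previous block
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    # Any retroactive edit changes a block's hash and breaks every later link
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                   # True
chain[0]["data"] = "Alice pays Bob 500"  # attempt to rewrite history
print(is_valid(chain))                   # False: block 0 no longer matches its hash
```

The failed validation after tampering is exactly the property that collides with rectification and erasure: a compliant "edit" to one block would invalidate the whole chain.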
The challenges that blockchain presents to the GDPR framework are immediately apparent. The GDPR is built on the presumption of an identifiable data controller, or joint controllers, accountable for how personal data is processed. Moreover, the technical specificities of the blockchain model are not easily aligned with data subjects’ rights to rectification or erasure of personal data, or the right to be forgotten. As the technology creates a perpetual ledger, principles such as data minimisation and storage limitation also fall foul.
The report also identifies various ways in which blockchain can be used to advance GDPR objectives; without the need for a single (or joint) data controller, it offers transparency over who has accessed data. Data subject rights of access and portability are facilitated by the technology. Ultimately, where blockchain technology has been used in the processing of personal data, its compliance with GDPR should be assessed on a case-by-case basis, taking into consideration factors such as the context (public v private) and whether the encryption of the data meets the threshold for anonymisation.
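One technique often discussed in this context is "crypto-shredding": personal data is recorded on-chain only in encrypted form, with the key held off-chain by the controller, and destroying the key renders the ciphertext permanently unreadable – which may or may not satisfy erasure, depending on whether a regulator accepts that as anonymisation. The sketch below uses a hash-derived keystream purely for illustration; it is not production cryptography.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Expand the key into a pseudo-random stream (illustrative, not production-grade)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream is symmetric: the same call encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)                   # held off-chain by the controller
ciphertext = xor_crypt(b"name: Jane Doe", key)  # only this goes on the ledger
print(xor_crypt(ciphertext, key))               # b'name: Jane Doe' while key exists
key = None  # "shredding" the key: the on-chain ciphertext becomes unrecoverable
```

Whether irreversibly unreadable ciphertext counts as erased or anonymised is precisely the kind of threshold question the case-by-case assessment above would have to answer.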
The above-mentioned EP resolution makes it clear that there is an explicit intention to support the adoption of blockchain technology across the EU. For GDPR compliance, the report proposes regulatory guidance, codes of conduct and certification mechanisms. Alternatively, research funding could be made available for interdisciplinary research on blockchain protocols that could be ‘compliant by design’.

What is clear is that at present there is nothing concrete in the pipeline that will assuage the concerns of privacy advocates, and the question remains – where there is a will, can a way be found?
The WRC found that daft.ie was vicariously liable for advertisements placed on its website by third parties where these were in breach of the Acts. It held that daft.ie was not shielded from these obligations by virtue of its status as an ‘Information Society Service Provider’ (ISSP) under the eCommerce Directive (2000/31/EC), which allows internet intermediaries a limited exemption from secondary liability as ‘conduits’ or transmitters of information.
The WRC adjudicator declared that there was no evidence before it to allow the exemption under the Directive and it directed daft.ie to implement a methodology to “identify, monitor and block discriminatory advertising on its website”. Without a more detailed description – and none is offered in the ruling – of how the WRC adjudicator arrived at her decision, it is impossible to assess the weight that was given to various factors outlined in the submissions.
The carving out of the exemption is a cornerstone of the eCommerce Directive and aims at preserving the intermediary model and preventing unintended collateral censorship. The Directive does allow for a service provider to terminate or prevent an infringement at the direction of a court or administrative authority; however, the adjudicator’s direction for daft.ie to put in place a system that would automatically flag potentially discriminatory postings to the site conflicts with provisions of the Directive that prevent member states from imposing on ISSPs either a general obligation to monitor the information that they transmit or store, or a general obligation to actively seek facts or circumstances indicating illegal activity.
Considering the near inevitability of occasional users of daft.ie’s services posting adverts using discriminatory language, the question remains whether daft.ie is the correct entity to automatically screen out all potentially discriminatory language. In light of rapidly evolving technological solutions to ‘policing’ the internet, there have been high-level discussions about whether the current Directive needs to be revised. Certainly, the WRC decision referred to daft.ie competitors that have adopted a process such as the one it proposes. Nevertheless, as the law currently stands, there is a question mark over whether the WRC correctly applied EU law and whether on appeal the High Court would uphold its decision.
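For context, the kind of screening the adjudicator directed could, at its simplest, be a keyword filter run over incoming adverts. The phrases below are hypothetical examples, not terms from the ruling, and any real deployment would need careful legal review and a human in the loop.

```python
import re

# Hypothetical phrases that might indicate discriminatory intent in a letting advert
FLAGGED_PATTERNS = [
    r"no\s+rent\s+allowance",
    r"professionals\s+only",
    r"no\s+children",
]

def flag_advert(text: str) -> list[str]:
    # Return every pattern the advert matches, for human review rather than auto-removal
    return [p for p in FLAGGED_PATTERNS if re.search(p, text, re.IGNORECASE)]
```

A filter like this illustrates the tension in the ruling: it is trivially easy to build, but a general obligation to run it over every posting is exactly what the Directive’s no-general-monitoring provisions guard against.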
In practice, we are coming across some confusion about the GDPR’s risk-based approach to compliance. It is important not to confuse this compliance concept with the risk facing your business or organisation.
Risk is the possibility of something happening which may cause loss, harm or damage, and it is usually assessed in terms of likelihood × severity.
In a GDPR compliance context, and where a risk-based approach is relevant, this means risk to the rights of individuals that may result from you processing their personal data. In other words, risk to individuals, not to your business or organisation.
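As a sketch, the likelihood × severity assessment is often operationalised on simple ordinal scales. The 1–5 scales and thresholds below are illustrative assumptions, not anything prescribed by the GDPR.

```python
def risk_level(likelihood: int, severity: int) -> str:
    """Classify risk to individuals using hypothetical 1-5 scales for each factor."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be on a 1-5 scale")
    score = likelihood * severity
    if score >= 15:
        return "high"    # e.g. a likely breach exposing health data
    if score >= 8:
        return "medium"
    return "low"

print(risk_level(5, 4))  # high
print(risk_level(2, 2))  # low
```

The point of the exercise is that both inputs describe the impact on the data subjects, not on the organisation doing the assessment.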
GDPR compliance obligations that refer to risk include accountability measures, security measures, data breach notification, Data Protection Impact Assessment (DPIA), and privacy by design.
While the term “risk” is not defined in the GDPR, Recital 75 provides useful guidance.
When undertaking risk assessment, risk for GDPR compliance purposes (risk to individuals) must be understood and distinguished from the important operational risks affecting your organisation (risk to you) including possible regulatory sanctions, reputational harm and/or legal action that might result from GDPR non-compliance.
With the prospect of increased regulatory activity ahead, it is important for businesses and organisations to ensure that their Privacy Notices are compliant.
Privacy Notices are an essential part of GDPR transparency obligations, which require that it be transparent to individuals that their personal data is being processed, to what extent, and what their rights are.
More than one Privacy Notice version may be needed depending on the category of individuals involved (customers, employees, etc.).
The information has to be provided in a clear, easily accessible format at the time personal data is obtained from individuals, or within one month when it is obtained from another source.
The mandatory categories of information to be provided are set out in Articles 13-14, and include purposes of processing, legal basis for processing, legitimate interests for processing (if applicable), data sharing, international transfers, and data retention.
Working all this out, with documentation to meet the requirements of accountability, can be challenging for businesses and organisations.
It may be necessary to refresh data mapping or review justifications for legal basis.
Privacy Notices should also align with the Records of Processing Activities (Article 30).
Privacy Notices are a critical part of GDPR compliance, but they are not a once-off exercise, and must be kept under review to reflect your processing activities.