Author Archives: Stephen Brett

About Stephen Brett

I qualified as a solicitor in 1997 and joined Anderson Law LLP in 2011. My specialism is technology transfer and I work with universities and research funders as well as SMEs. When I am not beavering away at work, and if I can duck my share of the childcare responsibilities, I like nothing more than to jump in a river (open water swimming) or dive into a good book.

Data consents: let's get granular

This blogger has previously discussed some of the difficulties in relying on consent as a justification for lawful processing under GDPR, but these difficulties bear closer examination.  First, the basics.  Then some thoughts on the use of consent in the research world and whether it is always needed.

The basics

Consent is one of the six lawful bases that justify the processing of personal data.  To be adequate, consent must be a freely given, specific, informed and unambiguous indication of the individual’s wishes by a statement or clear affirmative action – granular is the word the regulators use.  It is not silence or a pre-ticked opt-in box.  It is not a blanket acceptance of a set of terms and conditions that include privacy provisions.  It can be ‘by electronic means’ – it could be a motion such as a swipe across a screen.  But, where special category data (sensitive data such as health data) are processed and explicit consent is needed, this will be by way of a written statement.

The data controller must be able to demonstrate consent.  This goes to accountability – the controller is responsible for demonstrating compliance across the piece, although GDPR does not mandate any particular method.

Consent must be requested in an intelligible and easily accessible form and must be clearly distinguishable from other matters.  The request cannot be bundled up and appear simply as one part of a wider set of terms.  When the processing has multiple purposes, consent should be given for each of them – granularity again.  Conflated purposes remove freedom of choice.
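
For the concretely minded, here is what 'granular' might look like as a data structure.  This is purely this blogger's illustrative sketch in Python – nothing in GDPR or the guidance prescribes any particular implementation, and the field names are invented.  The only rules it encodes are: one record per purpose, nothing pre-ticked, and a timestamped evidence trail (which also helps with the demonstrability point above).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PurposeConsent:
    """One record per processing purpose -- consents are never bundled."""
    purpose: str                          # e.g. "newsletter", "study-follow-up"
    given: bool = False                   # defaults to False: no pre-ticked boxes
    timestamp: Optional[datetime] = None  # when the affirmative action happened
    evidence: str = ""                    # how consent was captured

@dataclass
class ConsentLedger:
    """Per-individual ledger: one entry per purpose, each asked separately."""
    subject_id: str
    purposes: dict = field(default_factory=dict)

    def record_consent(self, purpose: str, evidence: str) -> None:
        # A clear affirmative action, for one specific purpose only.
        self.purposes[purpose] = PurposeConsent(
            purpose, True, datetime.now(timezone.utc), evidence)

    def has_consent(self, purpose: str) -> bool:
        # Silence, or the absence of a record, is not consent.
        entry = self.purposes.get(purpose)
        return bool(entry and entry.given)
```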

Consent must be freely given.  It must be a real choice.  Employers will always find it hard to show that their employees have consented freely, for example.  The choice needs to be informed.  Without information, any choice is illusory (the transparency principle).  As a minimum, the informed individual would need to know: the controller’s identity; the purpose of the processing; the data to be collected and used; and that consent can be withdrawn.

It must be as easy to withdraw consent as it was to give it.  This doesn’t necessarily mean that withdrawal must be by the same action (swipe to consent and withdraw) but rather that withdrawal must be by the same interface (consent via the website, withdraw via the website).  After all, switching to another interface would involve ‘undue effort’ for the individual.  If consent is withdrawn, the individual must not suffer any detriment.

With pleasing circularity, demonstrating that withdrawal carries no cost and no detriment (meaning no significant negative consequences) helps to demonstrate that the consent itself has been freely given.
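
In the same illustrative vein, withdrawal can be offered through exactly the same interface, and with exactly the same effort, as giving consent.  Again, a sketch on invented names rather than any prescribed implementation:

```python
from datetime import datetime, timezone

# One store, one call shape: withdrawing takes no more effort than giving.
_consents: dict = {}  # keyed by (subject_id, purpose)

def give_consent(subject_id: str, purpose: str, evidence: str) -> None:
    _consents[(subject_id, purpose)] = {
        "given": True, "at": datetime.now(timezone.utc), "evidence": evidence}

def withdraw_consent(subject_id: str, purpose: str) -> None:
    # Same interface and same effort as giving; the withdrawal is
    # timestamped so the controller can demonstrate it carried no cost.
    _consents[(subject_id, purpose)] = {
        "given": False, "at": datetime.now(timezone.utc),
        "evidence": "withdrawn via the same interface"}
```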

Consent in research world

Getting granular consent (meaning consent specific to a given purpose) can be repetitive.  Bundling up different consents in one is not allowed, so multiple purposes make for long lists of consents and the risk of consent fatigue.  Other lawful bases may be more convenient and consent should not be the default or unthinking route for controllers.  Aside from the high threshold for adequate consent, the GDPR’s transparency agenda means that there is a strong argument that, if consent is chosen as the lawful basis at the outset, there can be no substitution of a different legal basis if consent is withdrawn.

Getting granular consent can be difficult.  GDPR recognises that it may not be possible to fully identify the purpose of scientific research processing at the point of data collection and acknowledges that individuals could consent only to certain areas of research.  GDPR’s principles are relaxed for the benefit of scientific research but they continue to apply.  The purpose of the processing must still be described but it is enough for the research purpose to be ‘well described’ rather than specific.  Transparency is a safeguard where specific consent is not possible.  Research plans should be available.  Consent should be refreshed as the research progresses.

Consent must be freely given.  Does a research participant have a free choice?  Probably yes, if the intended processing is not arbitrary or unusual and if the information provided is adequate and accurate.  An informed refusal to join a clinical trial will not lead to standard treatment being withdrawn so there is no detriment.  But what if the standard treatment is not working?  If the individual has to consent to arbitrary processing of their personal data in order to take what may be their only remaining hope then it is difficult to see that as a free choice.

Consent can be withdrawn.  Researchers have some comfort in that processing that has already been carried out remains legitimate after consent is withdrawn.  But further processing must stop, which threatens the ongoing research project unless the data can be disentangled.  To make matters worse (for the researcher), if there is no other legal basis for holding the data then it may be necessary to delete it – more difficult disentangling, especially if the individual forces deletion through their right to be forgotten.

What can the worried researcher do about the risk of withdrawal?  Anonymise the data and carry on is always a good answer.  Rely on a different legal basis in the first place (and carry on) is another good answer.

Sidestepping the issue by making the consent irrevocable is not a good answer: it would breach the requirement that consent can be withdrawn at any time.

A sneaky lawyer’s answer may be to embrace the requirement that consent must be as easy to withdraw as to give.  If changing formats involves ‘undue effort’ then avoid electronic means and require consent to be in writing.  This answer is not guaranteed by any stretch of the imagination: the data controller is essentially betting that few will bother to put pen to paper to withdraw.
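
Pulling the sensible answers together, the withdrawal-handling logic might be caricatured as follows.  The function and its parameters are this blogger's inventions for illustration only – and note the health warning in the comments: dropping an identifier column is rarely true anonymisation.

```python
def handle_withdrawal(dataset, subject_id, can_anonymise, has_other_basis):
    """Sketch: what happens to one participant's rows after withdrawal.

    Processing already carried out stays lawful; this decides only what
    may happen to the data from now on.
    """
    if has_other_basis:
        return dataset  # carry on under the alternative lawful basis
    if can_anonymise:
        # Strip the direct identifier and carry on.  NB: dropping an ID
        # column alone rarely defeats a determined re-identifier.
        return [{k: v for k, v in row.items() if k != "subject_id"}
                if row.get("subject_id") == subject_id else row
                for row in dataset]
    # No remaining basis: disentangle and delete the participant's rows.
    return [row for row in dataset if row.get("subject_id") != subject_id]
```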

Clearly GDPR consent is a troublesome beastie but there is one strong point in its favour.  Using consent as the legal basis for processing promotes trust.  Repeatedly refreshing that consent as the research progresses reinforces trust.  Trust makes the engagement stronger.  Perhaps the prize of a stronger and more committed and engaged research cohort based on consent is worth it?

Filed under Databases

Using personal data in research: all change…?

Pondering, as one does, the likely impact of the General Data Protection Regulation on one’s working life, this Blogger has been trying to figure out how simple it will be to use personal data for research purposes (especially research in healthcare) after 25th May 2018 – the day on which the GDPR comes into force.  Before you ask, whatever happens with Brexit, the timing is such that the GDPR will come into force in the UK.

The GDPR is similar and yet different to the present Data Protection Act.  Similar in that the use of personal data is still governed by a series of principles and that processing of personal data must have a lawful basis.  Different in the detail of the duties placed on data controllers and processors, the rights granted to individuals and the justifications available to show that data is being processed lawfully.  For now, this Blogger is focussing on the research use context.

Oxford is 51.7520° N

The GDPR allows some latitude for research uses.  ‘Latitude’ is not the same as ‘get out of jail free’.  If research users apply appropriate safeguards and data minimisation (limiting any processing to the extent necessary for the particular purpose) then some of the individual’s rights may be excluded.  But the core principles of the GDPR still apply.

Today, consent is the researcher’s go-to justification for processing personal data.  Under the DPA and the GDPR, processing is lawful if the individual has given consent.  However, GDPR consent is a different animal to DPA consent.  The GDPR sets higher standards in terms of information (specific uses and specific recipients should be listed) and record keeping.  The GDPR is clear that it must be as easy to withdraw as to give consent – potentially really troublesome for a research project.  The ICO’s draft guidance talks of obtaining granular consent that describes in advance all the proposed uses of the personal data and everybody who will have access to the personal data.  The consent will have to be specific and records comprehensive.  Under the DPA a researcher can be (fairly) comfortable with wording consenting to the use of personal data for a defined project ‘and other related research’.  Under the GDPR, the researcher will have to describe the project (ie the intended use) and list all those that will have access to the personal data and explain which other projects the personal data may be used for.  In effect, ‘if you’re not on the list, you’re not coming in’.  Thankfully, a pragmatic ICO recognises that not all future research uses can be specified in advance and the guidance allows some scope to ‘do the best you can’.

The result of these changes?  From the morning of 25th May 2018, existing consents may be rendered inadequate.

Can you hear the sounds of the research based economy grinding to a halt?  Be afraid, but not petrified.  Other possible means of demonstrating that processing has a lawful basis may be available.

First possibility is legitimate interest: GDPR treats processing as lawful to the extent that it is necessary for the purposes of the legitimate interests of the data controller as balanced against the impact on the individual concerned.  An interest is the broader aim or stake that the controller has in the processing.  It does not need to be described in advance but it will need to fall within the reasonable expectations of the individual.

The problem for healthcare research is that sensitive personal data (classified under GDPR as a ‘special category’) can only be processed where one of a separate list of exemptions applies.  The special categories include data concerning health.  This separate list of exemptions does not include legitimate interest: the legitimate interest justification does NOT justify the use of health data for research purposes.

Second possibility is that processing a special category of data is permitted where it is necessary for scientific research conducted in accordance with appropriate safeguards and where use of the data is proportionate to the research aim.  Useful but the emphasis is on ‘necessary’, ‘appropriate safeguards’ and ‘proportionate’.

A third possibility is to use anonymous data.  Like the DPA, the GDPR only applies to data relating to an identified or identifiable individual.  Currently, individuals do not have to give their consent for their personal data to be anonymised.  So, anonymise the data and all your problems fall away.

Anonymous or not…?

Inevitably, it is not that easy.  How anonymous does the data have to be before it no longer relates to a living and identifiable individual?  Today’s test is whether the anonymisation process is robust enough to be likely to defeat the efforts of the Motivated Intruder (about whom this Blogger has mused before).  The problem is that big data makes more things possible.  More pieces of the jigsaw are available to be found and linked together.  The Motivated Intruder doesn’t have to try too hard.
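
To see why the Motivated Intruder doesn’t have to try too hard, consider a toy linkage attack on entirely invented data: join the ‘anonymised’ extract to a public source on the quasi-identifiers they share and see what falls out.

```python
# Entirely invented data: a "de-identified" research extract and a
# public source that shares two quasi-identifiers with it.
anonymised_rows = [
    {"postcode_area": "OX1", "birth_year": 1972, "condition": "rare-disease-X"},
]
public_register = [
    {"name": "A. Example", "postcode_area": "OX1", "birth_year": 1972},
]

# The intruder simply joins the two sources on the shared fields.
for row in anonymised_rows:
    matches = [p for p in public_register
               if p["postcode_area"] == row["postcode_area"]
               and p["birth_year"] == row["birth_year"]]
    if len(matches) == 1:  # a unique match is a re-identification
        print(f'{matches[0]["name"]} -> {row["condition"]}')
```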

Despite its difficulties, consent may still be a useful possibility.  The GDPR permits processing of special category data where the individual has given explicit consent for a specified purpose.  The granular nature of consent has already been considered: proposed uses must be specified in advance.  In addition, the consent cannot be coerced – an outcome cannot be conditional on consent being given.  This may be a problem for commercial providers (‘you can only use this service if you give me all your personal data’).

A simple answer: the Russians took a pencil…

It is less likely to be a problem in research world.  Does ‘you must consent if you want to participate in this clinical trial’ amount to imposing a condition?  Probably not.  Research is not the provision of goods or a service.  But the problem remains that it must be as easy to withdraw consent as it was to give consent in the first place.  Consent is not a simple answer.

Clearly, researchers (and their admin support!) will have to plan carefully to comply with GDPR.  That is not a Bad Thing: behind every data point there is an individual who deserves protection.  In any case, facing more detailed provisions is not the same as being prevented from performing research.  The GDPR is an intricate piece but, like eating an elephant, it can be dealt with in small chunks.  So, as a starting approach for those wishing to use personal data in their research:

First, establish what data you wish to process.  Do you need to process all of it (data minimisation)?  Could you use anonymous data instead?

Second, establish whether it is a special category of data (eg health data) and, if so, whether the intended use is permitted by any of the available exemptions, such as necessity for scientific research or explicit (granular) consent – remembering that legitimate interest is not on the list for special category data.

Third, if it is not a special category of data, or, if it is a special category but there is an exemption available, then check that the proposed processing is lawful.  Essentially that means demonstrating that Article 6 of the GDPR is satisfied.  That is worthy of a separate blog post in itself…
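
For readers who like their checklists executable, the three steps can be caricatured in a few lines of Python (a sketch only – real GDPR analysis does not reduce to four booleans):

```python
def may_process(is_personal_data: bool,
                is_special_category: bool,
                has_article_9_exemption: bool,
                satisfies_article_6: bool) -> bool:
    """Caricature of the three-step approach set out above."""
    if not is_personal_data:       # e.g. effectively anonymised data
        return True                # the GDPR does not apply at all
    if is_special_category and not has_article_9_exemption:
        return False               # special category data with no exemption
    return satisfies_article_6     # a lawful basis is still needed
```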

Simple.

Filed under Intellectual Property, universities

New legal superhero (or supervillain) is born: the Motivated Intruder

The man on the Clapham omnibus should not be confused with Hector the Tax Inspector

English law is full of fantastical creatures.  Pride of place goes to the Reasonable Man who is, by all accounts, an ordinary and prudent person, who is bowler-hatted and most commonly found on the Clapham omnibus.

He gets everywhere (he has cousins on the Bondi tram and the Shau Kei Wan tram).  He is free from over-apprehension and from over-confidence.  He provides a neutral standard that assists the bemused lawyer to assess whether or not any particular act is negligent.

Contract lawyers know the Officious Bystander well.  He occasionally interrupts proceedings to suggest terms for inclusion in contracts which are so obvious that they can be implied and do not need to be stated.

The Man on the Bondi Tram retired in 1960.  Mr Pettifog remembers him well.

There is the Informed User who is something more than a consumer, knowing a fair bit about the existing design corpus but who is most definitely not an expert in the field.  He helps us to establish the boundaries of individual character in registered designs.  Or there is his close friend from the world of patents, the Person Skilled In The Art (aka the Nerd With No Imagination).  He is widely read in his field but has no imagination.  If he wouldn’t have thought of it, an invention satisfies the requirement of inventive step (non-obviousness) necessary for the grant of a patent.

Less impressive is the Moron In A Hurry.  If two items are so different that they would not confuse even the Moron In A Hurry then there is no confusion and no passing off or trade mark infringement.

IP Draughts confesses that he had not heard of the Man on the Shau Kei Wan Tram

There is now another character to add to the fold: the Motivated Intruder.

The Information Commissioner’s Office highlighted his existence in November 2012, although some sightings date back to 2008.  He (or quite possibly she) has been quietly permeating the vexed topic of effective anonymisation.  This is more interesting than it sounds and currently matters a great deal to academic researchers although I predict it will soon matter just as much to insurance companies.

Old Etonian classics scholar, and Mayor of London, demonstrates the correct use of the masculine dative plural of “omnis”

Under UK law, information about a living and identifiable person can only be processed in accordance with the terms of the Data Protection Act.  To generalise, if you don’t have the individual’s consent (informed and freely given), you can’t use their data.  This is an issue for researchers keen to use the huge repository of data collected by the National Health Service (NHS).  The NHS holds a treasure trove of useful information but it was collected for clinical care purposes, not for research.  Obtaining individual consent permitting personal data to be used for research purposes just isn’t practical.  Cue much gnashing of academic teeth at the wasted opportunity.  But there is hope.  If the data is anonymous, it does not qualify as personal data and the restrictions of the Data Protection Act fall away.

Consent is not necessary in order to perform the act of anonymising personal data.  However, the question that now looms is just how anonymous information has to be to ensure that it is no longer classed as personal data.  The Data Protection Act is concerned with the likelihood of re-identification rather than with the possibility.  It boils down to needing to know whether any given method of anonymisation renders the information so secure that it is not reasonably likely that individuals, even individuals with rare medical conditions living in sparsely populated regions, will be re-identified.

How can the researcher be confident that the data has been effectively anonymised and therefore is not personal data?  Enter the Motivated Intruder.

This character has no prior knowledge but wants to identify an individual from an anonymised dataset.  The Motivated Intruder is competent, has access to resources such as the internet and public documents, and, will take all reasonable steps to try to re-identify an individual from the anonymised dataset.  But the Motivated Intruder does not have specialist skills and will not break the law.  He sits somewhere between the inexpert member of the public and the skilled specialist.  If the statistical method used would defeat the Motivated Intruder then the data can be treated as anonymous and used with confidence by the researcher.
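
How might a researcher test, statistically, whether a dataset would see the Motivated Intruder off?  One well-known proxy is k-anonymity: every combination of quasi-identifier values must be shared by at least k records.  To be clear, the ICO does not prescribe k-anonymity as the legal test; it is offered here as one illustrative measure, sketched on invented data.

```python
from collections import Counter

def min_group_size(rows, quasi_identifiers):
    """Smallest number of records sharing any one combination of
    quasi-identifier values.  The dataset is k-anonymous (for these
    columns) if this returns at least k."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values()) if groups else 0

# Generalise first (full date -> year, full postcode -> area), then check.
rows = [
    {"postcode_area": "OX1", "birth_year": 1972},
    {"postcode_area": "OX1", "birth_year": 1972},
    {"postcode_area": "OX2", "birth_year": 1980},
]
print(min_group_size(rows, ["postcode_area", "birth_year"]))  # 1: the OX2 row stands alone
```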

Unfortunately, the Motivated Intruder is still a youngster.  There are few examples of his work.  In some cases, it has been enough to defeat the Motivated Intruder to redact certain aspects of the dataset such as the dates and locations of medical incidents.  In others the likelihood of identification was low enough that statistical information relating to same sex adoption and (in a separate case) to school entrance exams was effectively anonymised and could be released.  In another case, the raw data from a clinical trial could not be effectively anonymised and therefore should not be released.  There are questions that remain to be answered: just how hard will the Motivated Intruder try?  What sort of information does the Motivated Intruder care most about?  How much embarrassment or anxiety can the individual who is identified be expected to tolerate?

As with the Loch Ness Monster, we need a clearer picture…

As time goes on and the Motivated Intruder is cited (sighted… geddit? Unfortunately, yes. Ed) more often, we will have a clearer picture and researchers will be able to proceed with greater confidence.

In fact, the Motivated Intruder has the potential to play a starring role in an information debate coming to your screens in the very near future.  The care.data project has been put on ice because of growing public concern that anonymised health data could find its way into the hands of unscrupulous insurance companies who would promptly and easily re-identify it and use it to push our premiums up.  Time to call for the Motivated Intruder to restore public confidence?  Or is it too late for that?  The Motivated Intruder focuses on the likelihood of re-identification.  Public opinion might well be focussed on the possibility of re-identification.

PS IP Draughts is curious to know if there are any other fictional legal characters, not mentioned above, in readers’ jurisdictions.  He wonders whether the woman on the Edinburgh tram could be a candidate. Please let us know via this blog’s comments.

Filed under Confidentiality, Databases

Research results: more moonbeams in a jar?

I embarked on this post with trepidation.  It’s always worrying when you start out thinking that you have a problem and that you disagree with IP Draughts.  The problem involves know-how: how should you deal with it, when it is part of the output of a research project?

Many research projects produce identifiable and protectable IP. There may be negotiations over who should own that IP, and the law gives us a set of rules that explain who is the first owner if different terms are not agreed.

But research projects also produce interesting things that are not obviously classified as identifiable and protectable IP – the ill-defined stuff that is sometimes called ‘know-how’.  Partners in the research want (demand, in some cases) rights to continue to use the know-how or to stop other people from using it.   Research funders want to own it as an output of the research.   IP Draughts says it is not something that can be assigned. [Don’t take any notice of his maunderings.  Ed.]

Two examples of know-how occupy my world:

Results – actual hard data.  As explained elsewhere, there is no property in information itself.  If it is to be protected, it needs to be covered by confidentiality or fixed as an acknowledged piece of IP.  It could be recorded as a copyright work or entered into a database.  But that approach doesn’t really deal with the whole problem.  The copyright owner can prevent the copyright work from being copied without permission but the data probably also exists outside of the copyright work (eg the difference between a publication and the data it is based on).  As for databases, it is the database, and the investment in creating the database, that is protected by database right and not the data itself.  Database right gives the owner of the database the right to control the use of the database rather than the content of the database.  Neither protects the data itself.  Conclusion: confidentiality is the best bet to protect the data.

Methods – specifically, refinements to established surgical methods that arise in the course of clinical trials.  Again, there is no property in the information itself.  Patenting is unlikely to be the answer (cost, difficulty in demonstrating novelty, exclusion of certain methods from patent protection).  Copyright offers some hope but the same arguments apply.  Conclusion (again): if the method is to be protected at all, confidentiality is the best bet.

Daguerreotype stereotype

The researcher will want to be able to continue to use the know-how.  The funder, stereotypically, will want to protect, restrict and commercialise.  And so we turn to the law to find out who owns the know-how and who can control it and, potentially, how to transfer it.

We know that know-how is practical information that is secret, substantial and identified (EU Tech Transfer Reg (772/2004)) and we know that know-how is not property and cannot be assigned (IP Draughts commentaries).

English law protects trade secrets but will not protect know-how of itself (Faccenda Chicken).  Know-how is treated as a lesser being than a trade secret and needs to shelter behind confidentiality.  The EU is currently considering a draft Directive introducing specific protections for trade secrets (EU draft Dir 2013/0402) that would impose penalties for the unlawful acquisition, disclosure or use of trade secrets.  A trade secret is defined as information that is secret, has commercial value because it is secret and has been the subject of reasonable steps to keep it secret.  On that definition, know-how might amount to a trade secret in some cases.  But again we see that confidentiality is key.  We also see a requirement for that confidentiality to protect a commercial value.

So, it looks like the answer to my problem is that no one really owns know-how, although you can expect to control it if you have the benefit of a confidentiality obligation that covers it.  There might be property in related IP rights but my sense is that these IP rights will often not catch the guts of the know-how – the data or the method.  Phew, I agree with IP Draughts.

And the lessons for our researcher?  Read the definitions in the research agreement and read them carefully.  Then see where and how those definitions are used.  If know-how is referred to in the definition of Confidential Information and the funder controls the Confidential Information, the researcher’s right to use their general know-how may be restricted.  If the definition of Intellectual Property includes data or methods, we may have ‘category confusion’ particularly if that data and those methods are not protected by any recognised form of IP.

A good (?) example of this is the EU’s Horizon 2020 programme, whose contract terms define Background as follows:

Background’ means any data, know-how or information — whatever its form or nature (tangible or intangible), including any rights such as intellectual property rights — that: (a) is held by the beneficiaries before they acceded to the Agreement, and (b) is needed to implement the action or exploit the results.

Faced with a definition like this, which mixes up know-how, information and IP rights, you need to look very carefully at where the defined term is used and how it affects you.

Filed under Confidentiality, Intellectual Property