Category Archives: Databases

IP rights in geospatial data

Several strands of information come together. On 14th December, the latest court decision in the long-running saga between 77M Limited and Ordnance Survey Limited was published.

77M is a private company that has developed and is commercialising a database of UK land and properties and has, or at some time in the past had, a licence from the UK Land Registry (HMLR) to access one of the latter’s databases.

The Ordnance Survey (OS) was formed several hundred years ago and was originally part of the army. Nowadays it is a company owned by the government. It is best known for providing high-quality maps of the United Kingdom. When IP Draughts was at junior school, he was taught about the OS’s 1 inch to 1 mile (or 1:63,360) map. Following metrication in the UK in the late 1960s and 1970s, that series was discontinued, and the closest equivalent has been the 1:50,000 map. In other words, like HMLR, OS has developed so-called geospatial data in relation to the UK.

The case linked above is a decision of Mr Justice Arnold, who rejected an application for summary judgment by OS. OS sought summary dismissal of 77M’s claim that OS had induced HMLR to breach a contract under which HMLR supplied property-related data from its database to 77M.

The arguments in that case are only of passing interest, and anyway the underlying facts are not fully explained in this interim decision. Arnold J noted that the contract in question was described as a “contract schedule” (suggesting to IP Draughts that it might have originally been part of a larger master services agreement, although this is not stated in the decision) and provided for a fee of £2,500 in return for undertaking certain searches. 77M argued that this was an “ongoing” contract, while OS argued it was a one-off contract. After reviewing clauses of a “lamentably badly drafted” contract that pointed in either direction, Arnold J declined to hold that OS’s case was so strong that it should get summary judgment. The interpretation of the contract should await full trial of the action.

The larger dispute between 77M and OS has been rumbling through the courts, and has not finished yet. An earlier hearing considered the question of when it was appropriate to transfer cases between the low-cost Intellectual Property Enterprise Court and the High Court.

Standing back from the case, what is going on? IP Draughts has no inside information about the case, but he is aware that the UK government is pressing ahead with plans to develop a national strategy to commercialise the UK’s geospatial data, much of which has been developed by government bodies and agencies, including OS and HMLR.

To help formulate this strategy, the government is forming a Geospatial Commission. The appointments of the chair and deputy chair were recently announced. The deputy chair is a former CEO of OS. The announcement describes the commission’s role as being to “drive the use of location-linked data more productively, to unlock up to £11 billion of extra value for the economy every year”.

Other documents identify OS and HMLR as some of the main custodians of this data.

IP Draughts wonders whether small, private companies that are already using UK geospatial data may compete with the government’s ambitious plans. This doesn’t necessarily mean that it is wrong for a government agency to terminate a commercial licence agreement, or for another government agency to encourage it to do so. IP Draughts doesn’t have enough information (geolegal data?) to form a view on this question. But it is curious that this case is rumbling on at the same time as the Geospatial Commission is being formed.



Filed under Contract drafting, Databases, Intellectual Property

Big data, big policy decisions

First of all, thanks to the many readers who have commented on the last posting on this blog, which ruminated on its future. Your comments were very helpful (and also very kind). IP Draughts has not yet taken any major decision, and for the time being will continue as before.

Today’s theme is “big data” and the policy decisions that accompany it (not them, please!).

IP Draughts has come across this subject in several contexts recently. There is health data, such as that held by the UK National Health Service (NHS) about its patients. Several of our clients have been involved in licensing-in or licensing-out such data, whether as a hospital, university or start-up technology company. These activities can raise some significant data protection issues, but fortunately several members of our team have become very familiar with this area of law, including Francis Davey and Stephen Brett.

On the public stage, there have been well-publicised initiatives to mine such data. Lord Drayson recently raised £60 million from investors on the AIM market, for his company, Sensyne Health, which has entered into agreements with several NHS Trusts. He is reported as saying:

The NHS has a “responsibility to society” to make money out of patient data rather than allowing the profits to be captured by US technology companies…

[there is] an “ethical imperative” to use anonymised data to improve care.

The national focus on big data is not confined to the health field. So-called geospatial data is also under the spotlight. In last Autumn’s Budget, the UK’s Chancellor of the Exchequer announced the formation of a Geospatial Commission to “maximise the value of all UK government data linked to location, and to create jobs and growth in a modern economy.” More recently, the government has declared:

From emergency services, transport planning, and 5G networks, to housing, smarter cities and drones – the UK’s geospatial infrastructure has the potential to revolutionise the UK’s economy.

The government is currently recruiting for members of this commission and for the civil servants that will support them. The commission will set a strategy for commercialisation of the nation’s geospatial data, working with the main agencies that currently hold the data, including the Ordnance Survey and the Land Registry.

National initiatives spawn national policies and codes of practice. Where personal data is involved, and where the custodian of the data is a public body such as the NHS, documents of this kind are perhaps inevitable. The latest one to cross IP Draughts’ desk is called “Initial code of conduct for data-driven health and care technology”. It sets out “10 key principles for safe and effective digital innovations, and 5 commitments from the government to ensure that the health and care system is ready and able to adopt new and innovative technology at scale.” The document’s introduction explains the government’s underlying thinking:

Today we have some truly remarkable data-driven innovations, apps, clinical decision support tools supported by intelligent algorithms, and the widespread adoption of electronic health records. In parallel, we are seeing advancements in technology and, in particular, artificial intelligence (AI) techniques. AI is being used on this data to develop novel insights, tools to help improve operational efficiency and machine learning driven algorithms, and clinical decision support tools to provide better and safer care.

This presents a great opportunity, but these techniques are reliant on the use of data that the NHS and central government have strong duties to steward responsibly. Data-driven technologies must be harnessed in a safe, evidenced and transparent way. We must engage with patients and the public on how to do this in a way that maintains trust.

AI, AI, Oh!

The 10 principles are not particularly surprising or radical for anyone familiar with GDPR and government policy generally; what is noteworthy is that the principles have been brought together and published for the circumstances of big health data. They are explained in more detail in the document itself, but the headings are:

  1. Define the user
  2. Define the value proposition
  3. Be fair, transparent and accountable about what data you are using
  4. Use data that is proportionate to the identified user need (data minimisation principle of GDPR)
  5. Make use of open standards
  6. Be transparent to the limitations of the data used and algorithms deployed
  7. Make security integral to the design
  8. Define the commercial strategy
  9. Show evidence of effectiveness for the intended use
  10. Show what type of algorithm you are building, the evidence base for choosing that algorithm, how you plan to monitor its performance on an ongoing basis and how you are validating performance of the algorithm

The possibilities of big data, artificial intelligence (AI) and algorithms seem to have captured the attention of the UK government. These developments should mean more work for IP and IT lawyers and for technology transfer managers –  and help to offset the likely negative effects for this part of the UK economy that will result from Brexit.


Filed under Databases, Legal policy

Blast from the past: is software ‘goods’?

Back in the 1980s, when brightly-coloured tracksuits were in fashion, IP Draughts took a part-time course in IT law at Queen Mary University. One of the subjects that he earnestly studied was whether the supply of software amounted to a sale of goods, for the purposes of the Sale of Goods Act 1979.

He was convinced that it didn’t amount to a sale of goods, and he carried this conviction with him into the 1990s, when he wrote his first book, Technology: the Law of Exploitation and Transfer (Butterworths, 1996). The third edition of that work, now called simply Technology Transfer (Bloomsbury, 2010), discusses at pages 461-466 the legal issues involved in this question, and in the related question of whether the sale of a patent could amount to a sale of goods. The discussion briefly mentions the 1995 case of St Albans District Council v ICL, in which the Court of Appeal considered (obiter) that the answer to this question might depend on whether the software was supplied on a disk.

IP Draughts has long felt that this is a ridiculous distinction to make, as the nature of software is not changed by the medium or method by which it is supplied. The value in the software depends on the electronic content, not on the piece of plastic on which it is, or is not, delivered. However, as mentioned below, this is a distinction that the courts have used to justify their decisions in subsequent cases.

The fourth edition of that book is now being written, and will mention a new case that continues the judicial debate on this subject. The Court of Appeal case of Computer Associates UK Ltd v The Software Incubator Ltd [2018] EWCA Civ 518 appeared on the BAILII website last week. The main question to be decided was whether, for the purposes of EU law on commercial agents, the supply of software (typically by download over the internet) amounted to a sale of goods.

At first instance, His Honour Judge Waksman QC had decided that it did amount to a sale of goods. In the Court of Appeal, Gloster LJ, giving a judgment with which her fellow judges agreed, decided that it did not.

Gloster LJ’s judgment considers certain English, EU and other case law in this field, including the St Albans District Council case. IP Draughts has a great deal of sympathy with Gloster LJ’s comment, at paragraph 45, that:

…I am somewhat uncomfortable with a conclusion that the tangible/intangible distinction leads to a construction of “goods” that excludes the Software, which seems artificial in the modern age. However, I consider this to be justified given the commercial context and notwithstanding the superficial attraction of the respondent’s arguments, which I next consider.

After considering the arguments and case law in further detail, including the fact that the Consumer Rights Act 2015 introduced a new concept of supplying “digital content”, she reaches the following conclusion:

I conclude that the judge was wrong in law in holding that the Software, which was supplied to CA’s customers electronically and not on any tangible medium, constitutes “goods” within the meaning of Regulation 2(1). I would therefore allow the appeal on this issue.

Hurrah!



Filed under Databases, Intellectual Property, Licensing

Data consents: let’s get granular

This blogger has previously discussed some of the difficulties in relying on consent as a justification for lawful processing under GDPR, but these difficulties bear closer examination.  First, the basics.  Then some thoughts on the use of consent in the research world and whether it is always needed.

The basics

Consent is one of the six lawful bases that justify the processing of personal data.  To be adequate, consent must be a freely given, specific, informed and unambiguous indication of the individual’s wishes by a statement or clear affirmative action – granular is the word the regulators use.  It is not silence or a pre-ticked opt-in box.  It is not a blanket acceptance of a set of terms and conditions that include privacy provisions.  It can be ‘by electronic means’ – it could be a motion such as a swipe across a screen.  But, where special category data (sensitive data such as health data) are processed and explicit consent is needed, this will be by way of a written statement.

The data controller must be able to demonstrate consent.   This goes to accountability – the controller is responsible for demonstrating compliance across the piece although GDPR does not mandate any particular method.

Consent must be requested in an intelligible and easily accessible form and must be clearly distinguishable from other matters.  The request cannot be bundled up and appear simply as one part of a wider set of terms.  When the processing has multiple purposes, consent should be given for each of them – granularity again.  Conflated purposes remove freedom of choice.

Consent must be freely given.  It must be a real choice.  Employers will always find it hard to show that their employees have consented freely, for example.  The choice needs to be informed.  Without information, any choice is illusory (the transparency principle).  As a minimum, the informed individual would need to know: the controller’s identity; the purpose of the processing; the data to be collected and used; and, that consent can be withdrawn.

It must be as easy to withdraw consent as it was to give it.  This doesn’t necessarily mean that withdrawal must be by the same action (swipe to consent and withdraw) but rather that withdrawal must be by the same interface (consent via the website, withdraw via the website).  After all, switching to another interface would involve ‘undue effort’ for the individual.  If consent is withdrawn, the individual must not suffer any detriment.

With pleasing circularity, demonstrating that withdrawal carries no cost and no detriment (meaning no significant negative consequences) helps to demonstrate that the consent itself has been freely given.
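
For readers who like to see these requirements written down as a data model, here is a minimal sketch in Python of the kind of per-purpose consent record a controller might keep in order to demonstrate consent and to handle withdrawal. The names (ConsentRecord, ConsentLedger and so on) are entirely hypothetical; this is an illustration of the principles above, not a statement of what GDPR requires.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical sketch: one record per individual per purpose ("granularity"),
# capturing what a controller would need in order to demonstrate consent.

@dataclass
class ConsentRecord:
    subject_id: str                     # pseudonymous identifier for the individual
    purpose: str                        # one specific purpose, never a bundle of purposes
    information_shown: str              # reference to the notice the individual actually saw
    method: str                         # e.g. "unticked opt-in box", "written statement"
    given_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def give(self) -> None:
        # Consent is a clear affirmative action; nothing is pre-ticked by default.
        self.given_at = datetime.now(timezone.utc)
        self.withdrawn_at = None

    def withdraw(self) -> None:
        # Withdrawing must be as easy as giving consent, via the same interface.
        self.withdrawn_at = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        return self.given_at is not None and self.withdrawn_at is None


@dataclass
class ConsentLedger:
    records: List[ConsentRecord] = field(default_factory=list)

    def may_process(self, subject_id: str, purpose: str) -> bool:
        # Each purpose needs its own active consent; purposes are not conflated.
        return any(
            r.subject_id == subject_id and r.purpose == purpose and r.is_active()
            for r in self.records
        )
```

A ledger of this kind would let the controller show, purpose by purpose, what the individual was told, how and when they agreed, and whether they have since withdrawn.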

Consent in research world

Getting granular consent (meaning consent specific to a given purpose) can be repetitive.  Bundling up different consents in one is not allowed, so multiple purposes make for long lists of consents and the risk of consent fatigue.  Other lawful bases may be more convenient, and consent should not be the default or unthinking route for controllers.  Aside from the high threshold for adequate consent, the GDPR’s transparency agenda means that there is a strong argument that, if consent is relied on as the lawful basis at the outset, there can be no substitution of a different legal basis if consent is later withdrawn.

Getting granular consent can be difficult.  GDPR recognises that it may not be possible to fully identify the purpose of scientific research processing at the point of data collection and acknowledges that individuals could consent only to certain areas of research.  GDPR’s principles are relaxed for the benefit of scientific research but they continue to apply.  The purpose of the processing must still be described but it is enough for the research purpose to be ‘well described’ rather than specific.  Transparency is a safeguard where specific consent is not possible.  Research plans should be available.  Consent should be refreshed as the research progresses.

Consent must be freely given.  Does a research participant have a free choice?  Probably yes, if the intended processing is not arbitrary or unusual and if the information provided is adequate and accurate.  An informed refusal to join a clinical trial will not lead to standard treatment being withdrawn so there is no detriment.  But what if the standard treatment is not working?  If the individual has to consent to arbitrary processing of their personal data in order to take what may be their only remaining hope then it is difficult to see that as a free choice.

Consent can be withdrawn.  Researchers have some comfort in that processing already carried out remains legitimate after consent is withdrawn.  But further processing must stop, which threatens the ongoing research project unless the data can be disentangled.  To make matters worse (for the researcher), if there is no other legal basis for holding the data then it may be necessary to delete it – more difficult disentangling, especially if the individual forces deletion through their right to be forgotten.

What can the worried researcher do about the risk of withdrawal?  ‘Anonymise the data and carry on’ is always a good answer.  ‘Rely on a different legal basis in the first place (and carry on)’ is another good answer.  The sketch below illustrates these options.
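
Continuing the hypothetical Python sketch from earlier in this post, the options just described might be expressed along the following lines. The dataset object and its anonymise and delete methods are assumptions made purely for illustration; this is not legal advice or a prescribed procedure.

```python
def handle_withdrawal(ledger, dataset, subject_id, purpose,
                      other_legal_basis=False, can_anonymise=False):
    """Hypothetical sketch of the researcher's options when consent is withdrawn."""
    # Record the withdrawal; processing already carried out remains legitimate.
    for record in ledger.records:
        if record.subject_id == subject_id and record.purpose == purpose:
            record.withdraw()

    if can_anonymise:
        # "Anonymise the data and carry on": truly anonymised data is no longer personal data.
        dataset.anonymise(subject_id)
    elif other_legal_basis:
        # Another lawful basis was relied on from the outset, so the data can be retained.
        pass
    else:
        # No remaining basis: further processing must stop and deletion may be necessary,
        # certainly if the individual exercises the right to be forgotten.
        dataset.delete(subject_id)
```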

Sidestepping the issue by making the consent irrevocable is not a good answer: it would breach the requirement that consent can be withdrawn at any time.

A sneaky lawyer’s answer may be to embrace the requirement that consent must be as easy to withdraw as to give.  If changing formats involves ‘undue effort’ then avoid electronic means and require consent to be in writing.  This answer is not guaranteed by any stretch of the imagination: the data controller is essentially betting that few will bother to put pen to paper to withdraw.

Clearly GDPR consent is a troublesome beastie but there is one strong point in its favour.  Using consent as the legal basis for processing promotes trust.  Repeatedly refreshing that consent as the research progresses reinforces trust.  Trust makes the engagement stronger.  Perhaps the prize of a stronger and more committed and engaged research cohort based on consent is worth it?


Filed under Databases