How to construct the perfect password
Passwords are personal, secret, vital and too complicated to be guessed. That’s the theory. It seems expert advice hasn’t lived up to the complicated part. In an article in the Wall Street Journal (WSJ), Bill Burr, a former National Institute of Standards and Technology (NIST) manager who wrote the US government’s password guidance, says his advice wasn’t right.
The original report, published in 2003, was NIST Special Publication 800-63 Appendix A. It’s been updated regularly, and its proposed password management included:

- Changing passwords every 90 days
- Adding capital letters, numbers and symbols to words, such as password becoming Pa55?w0rd.
He now says passwords shouldn’t be changed frequently because people often make only small modifications, such as Pa55?w0rd to Pa55!w0rd. These changes weaken passwords when the intention’s to strengthen them.
A report by the BBC says a better method’s a random string of words, such as "pig coffee wandered black." Password-cracking software takes far longer to break a random passphrase like this than to find a predictable substitution such as Pa55!w0rd.
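The arithmetic behind that claim can be sketched in Python. The 7,776-word list size is an assumption borrowed from the common Diceware method; it isn’t specified in the BBC report.

```python
import math

def passphrase_entropy_bits(num_words: int, wordlist_size: int) -> float:
    """Theoretical entropy of a passphrase built from random wordlist picks."""
    return num_words * math.log2(wordlist_size)

def charset_entropy_bits(length: int, charset_size: int) -> float:
    """Upper bound for a password of truly random characters. Patterns
    like Pa55!w0rd are far weaker in practice because crackers try
    dictionary words plus common substitution rules first."""
    return length * math.log2(charset_size)

# Four words drawn at random from a 7,776-word Diceware-style list
print(round(passphrase_entropy_bits(4, 7776), 1))   # 51.7 bits

# Nine characters from a 72-symbol set looks stronger on paper...
print(round(charset_entropy_bits(9, 72), 1))        # 55.5 bits
# ...but Pa55!w0rd isn't random: it's 'password' with predictable
# substitutions, so its effective entropy is only a handful of bits.
```

The point isn’t the raw numbers: the passphrase actually delivers its theoretical entropy because the words are chosen at random, while the substituted word doesn’t.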
Africa’s eHealth programmes and users can adopt this updated advice. They should also follow research on cyber-security practices. Complying with evidence-based actions is always best.
- 395 views
- November 10, 2017
- Tom Jones
UK’s NHS made illegal patient data transfer to Google’s DeepMind
As eHealth expands its reach, and Artificial Intelligence (AI) becomes routine, benefits will increasingly depend on health systems handing over their patient data to specialist companies. It seems inevitable, but it might not always be legal. The UK’s NHS found that it wasn’t.
An article in the UK’s Guardian says the Royal Free London NHS Trust, based in London, broke the law in November 2015 when it transferred 1.6m patient-identifiable records to DeepMind, the AI outfit owned by Google. It was part of a project in which DeepMind built Streams, an app that provides clinical alerts about kidney injury. DeepMind needed the data for testing.
The ruling says by transferring the data and using it for app testing, the Royal Free breached four data protection principles and patient confidentiality under common law. It sees the transfer as not fair, transparent, lawful, necessary or proportionate. Patients wouldn’t have expected it, they weren’t told about it, and their information rights weren’t available to them.
The UK’s Information Commissioner agreed. Its view’s that the core issue wasn’t the innovation. It was the inappropriate legal basis for sharing data which DeepMind could use to identify all the patients. A better way’s to keep the data in the health system and interface with apps such as Streams only when a clinical need arises.
Two issues are important. One’s dealing with an apparent data-grab of millions of patient records by a global organisation. The other’s the way the NHS seems keen to embed a global company into its routine working. Both need regulation to protect patients’ rights and interests.
These offer insights for Africa’s health systems to deal constructively with external eHealth and AI firms. The relationships are already on a trajectory. The lesson from the NHS and DeepMind project is that Africa should avoid being dragged along in its wake. There’s still time to do it.
- 651 views
- July 07, 2017
- Tom Jones
Some mHealth apps need better privacy policies
Keeping people’s health and healthcare data private is a strict requirement even without eHealth. One change that eHealth’s achieved is an increase in societies’ and people’s awareness of privacy’s importance.
A study by the Future of Privacy Forum (FPF), a US think tank, found that some mHealth providers don’t always see privacy like this.
FPF’s study shows the need for standards of best practice for health and wellness data. To move it on, FPF’s published Best Practices for Consumer Wearables and Wellness Apps and Devices. It’s a detailed set of guidelines for app developers to follow so they can provide practical privacy protections for health and wellness data generated by users. The Robert Wood Johnson Foundation supported the initiative, which includes contributions from several stakeholders, including companies, advocates and regulators. It provides essential privacy policies and requirements for Africa’s health systems to adopt as they expand their mHealth programmes.
The ten best practices are in three categories:
Consumer choice
- Opt-in consent for data sharing with third parties
- A ban on sharing with data brokers, information resellers and advertising networks
- Opt-outs for tailored first-party advertising
- Access, correction and deletion rights
- Enhanced notice and express consent for incompatible secondary uses

Supporting interoperability (IoP)
- Compatibility with gold-standard privacy frameworks
- Compliance with leading app platform standards

Elevating data norms
- Support for sharing data for scientific research with informed consent
- A strong re-identification standard
- Strong data security standards.
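The consumer-choice rules above could, as a rough illustration, be enforced with a check like this hypothetical sketch; the recipient categories and function name are illustrative, not part of the FPF guidelines.

```python
# Hypothetical sketch of the opt-in rule: third-party sharing is
# blocked unless the user has explicitly consented, and data brokers,
# resellers and ad networks are banned outright.
BANNED_RECIPIENTS = {"data_broker", "information_reseller", "advertising_network"}

def may_share(recipient_type: str, user_opted_in: bool) -> bool:
    if recipient_type in BANNED_RECIPIENTS:
        return False          # banned regardless of consent
    if recipient_type == "first_party":
        return True           # first-party use; opt-out handled separately
    return user_opted_in      # third parties need explicit opt-in

print(may_share("data_broker", True))        # False: the ban overrides consent
print(may_share("research_partner", False))  # False: no opt-in given
print(may_share("research_partner", True))   # True
```

The key design point the guidelines make is visible in the ordering: the ban on data brokers is checked first, so no amount of consent can route data to them.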
The guidelines can extend beyond countries’ existing eHealth, and specifically mHealth, legislation and regulation. For Africa’s health systems, where specific eHealth legislation and regulation is not yet developed, FPF’s guidelines provide an effective way of stepping it up.
- 550 views
- September 02, 2016
- Lesley Dobson
EC’s mHealth privacy code can meet Africa’s regulation needs
African countries recognise the need for privacy in eHealth. Many countries’ privacy regulations are for general data protection and may not be specific enough for all eHealth services. With mHealth being a major part of Africa’s eHealth, it seems to offer a good template to start to build up and apply eHealth regulations.
The European Commission (EC) offers a helpful starting point. Its draft Code of Conduct on privacy for mobile health apps has been completed. It’s derived from data protection law, and awaiting formal approval. When it’s attained, app developers can volunteer their commitment to comply with the Code.
A set of questions is suggested for completing a PIA. They’re:

- Which kinds of personal data will the app process?
- For which purposes will this data be processed?
- How has users’ consent been obtained to process their data for every type of use foreseen?
- Was someone designated to answer questions about the app’s privacy requirements?
- Was the app developed in consultation with a healthcare professional to ensure that data is relevant for the app’s purposes and not misrepresented to users?
- Explain what’s been done to respect a set of security objectives, or explain why they’re not relevant, starting with the principles of privacy by design and privacy by default:
  - Data has been pseudonymised or anonymised wherever possible
  - Appropriate authorisation mechanisms have been built into the app to avoid unlawful access
  - Effective encryption has been used to mitigate the risk of breaches
  - Independent system security audits have been considered
  - Users are informed when updated versions are available
  - All use of old versions is blocked if the update is security critical
  - The app has been developed using known guidelines on secure app development and secure software development
  - The app has been tested using mock data before it’s made available to real end users
  - Incidents affecting remotely stored data can be identified and addressed
- If any personal data collected or processed by the app is transferred to a third party, have appropriate contractual guarantees about their obligations been obtained, including purpose limitations, security measures and their liability?
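The pseudonymisation objective can be sketched with a keyed hash. This is a minimal illustration, not the Code’s prescribed method; the key, field names and record layout are all hypothetical.

```python
import hmac
import hashlib

# Hypothetical pseudonymisation step: replace a direct identifier with
# a keyed hash (HMAC-SHA256). The secret key must be stored separately
# from the data; without it, pseudonyms can't be re-generated or linked
# back to the original identifiers.
SECRET_KEY = b"keep-this-out-of-the-dataset"  # illustrative only

def pseudonymise(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "ID-1234567", "creatinine": 142}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}

# The same input always maps to the same pseudonym, so records for one
# patient can still be linked without revealing who the patient is.
assert pseudonymise("ID-1234567") == safe_record["patient_id"]
```

A keyed hash is preferable to a plain hash here: without the key, an attacker can’t rebuild the mapping by hashing a list of known identifiers.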
The Code is the culmination of a wide range of contributions. It’s a very valuable contribution to best practice for attaining privacy in apps and for this aspect of mHealth regulation. App developers in Africa can enhance their products by showing how they’ve complied, even if countries haven’t incorporated the guidelines into eHealth regulations. Regulations can follow on promptly if countries use the EC Code as the initial draft of their bespoke versions.
- 463 views
- July 14, 2016
- Tom Jones
Google's DeepMind has UK's NHS patient data
Privacy, confidentiality and ownership are important issues for personal health data. The UK’s NHS has given Google’s DeepMind 1.6m patient records from three London hospitals as part of an Artificial Intelligence (AI) project to build Streams, an app to help hospital staff monitor kidney patients. An article in New Scientist expresses some unease and reservations.
It says the arrangement goes beyond an NHS data-sharing agreement and what was publicly announced. The arrangement also reveals a clear view of what the company is doing and what sensitive data it now has access to.
While the project is for kidney patients, data given to DeepMind from the Royal Free NHS Trust includes information about people who are HIV-positive, drug overdoses and abortions over the last five years. It also includes access to the Trust’s submissions to the NHS Secondary User Service (SUS) database that includes all hospital treatments, such as critical care and accident and emergency departments.
New Scientist says the data handed over suggests DeepMind has plans for a lot more than just the Streams app. Sam Smith of MedConfidential is quoted as saying: “This is not just about kidney function. They’re getting the full data.”
Google says all the data’s needed because there’s no separate dataset for people with kidney conditions. This implies that searches of codes such as the International Classification of Diseases (ICD) and NHS Healthcare Resource Groups (HRG) can’t provide routes into the information needed. A Trust statement says it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”
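The kind of code-based query the Trust says is insufficient can be sketched as a simple filter. This is a hedged illustration: the record layout is invented, though ICD-10 codes N17–N19 do cover acute kidney injury and kidney failure.

```python
# Illustrative filter: select only records whose diagnosis codes fall
# in the ICD-10 kidney range (N17-N19). Field names are hypothetical,
# not the NHS schema.
KIDNEY_PREFIXES = ("N17", "N18", "N19")

def kidney_related(record: dict) -> bool:
    return any(code.startswith(KIDNEY_PREFIXES) for code in record["icd10_codes"])

records = [
    {"id": 1, "icd10_codes": ["N17.9"]},           # acute kidney injury
    {"id": 2, "icd10_codes": ["J45.0"]},           # asthma: excluded
    {"id": 3, "icd10_codes": ["E11.2", "N18.3"]},  # diabetes with CKD
]
subset = [r for r in records if kidney_related(r)]
print([r["id"] for r in subset])  # [1, 3]
```

Google’s argument, in effect, is that a filter like this would miss patients whose kidney problems haven’t yet been coded, which is why it says the full dataset was needed.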
DeepMind’s also developing Patient Rescue to provide a data analytics service to NHS hospital trusts. It’ll use data streams from hospitals to build other tools in addition to Streams. These are planned to provide real-time analyses of clinical data and support diagnostic decisions. By comparing new patients’ information with millions of other cases, Patient Rescue might predict whether they’re in the early stages of diseases that aren’t yet symptomatic.
While some people might be alarmed at the scale and scope of data-sharing for analytics, it’s on the increase. Oxford University’s Computational Health Informatics Lab has deployed computer learning tools across the four hospitals of the Oxford University Hospitals NHS Foundation Trust. As well as monitoring individual patients’ health, these systems can look for infectious disease outbreaks.
It looks like we’ll have to come to terms with our health data being shared. It might be more acceptable if sharing is transparent, within regulations, and the regulations are enforced. These are important lessons for Africa’s eHealth regulations as they develop.
- 697 views
- May 17, 2016
- Tom Jones
Would you fight to protect your data?
Most of us agree: data privacy is a high priority for eHealth. But what each of us should do to protect it is more difficult to answer. An Austrian law student has a suggestion: challenge the biggest company you believe is not protecting your data and win.
It’s taken Max Schrems three years of legal battles, but now Facebook’s European privacy practices are to be investigated by the Irish data protection watchdog. The October 2015 ruling overturns a previous decision by the watchdog that was premised on a safe harbour agreement which was recently declared invalid by the European Court of Justice (ECJ) after another, separate, two-year case by Schrems against Facebook.
An Irish Guardian article says the high court in Dublin quashed the Irish data protection commissioner’s original refusal to examine Schrems’ complaint over the alleged movement of his data outside of Europe by Facebook after referring the case to the ECJ.
For fifteen years a safe harbour agreement deemed European citizens’ data transferred between the EU and US to be adequately protected, allowing US companies to self-certify their data protection practices. Not anymore. Judge Gerard Hogan described it as a landmark challenge that led to the most important ruling of the ECJ in years, one that “transcended international law … The commissioner is obliged now to investigate the complaint … and I’ve absolutely no doubt that she will proceed to do so,” Hogan said.
Facebook doesn’t seem happy. It said “We will respond to inquiries from the Irish Data Protection Commission as they examine the protections for the transfer of personal data under applicable law,” reiterating that it does not give the US government direct access to its servers and it does not recognise the National Security Agency’s (NSA) Prism surveillance programme.
There are lessons for African countries. As we explore our obligations and global good practice to protect data, there are challenges. One is the risk of assuming protection can be implied, without addressing the details. Another is the value of acting regionally to ensure protections.
Schrems said that watchdogs in 28 European states will now be able to accept complaints about the movement of personal information and that he was considering other challenges to tech giants involved in cloud services. He said, “The court has been very clear a new safe harbour would have to give you the same rights as you have in Europe. That’s going to be hard to get a deal on.”
Questions remain for Africa, such as: what’s the right starting point for tackling these issues so that African countries don’t have to deal with them alone? Could it be through the African Union Commission (AUC), or Regional Economic Communities (RECs)? Are new regulatory bodies needed?
Either way, tackling the issue soon is probably a good idea and cooperation between governments and companies might be sensible. Somewhere among African students there may be someone in the mood for a lengthy legal battle. They may have already set their sights on a multinational company with questionable data protection policies.
- 556 views
- November 25, 2015
- Sean Broomhead
Some of England's NHS apps aren't secure
Like fish and chips, smartphones and apps belong together. To ensure a harmonious relationship, England’s NHS accredits apps for people to use with a degree of confidence. They’re listed in the NHS Choices Health Apps Library, which tests them to ensure they meet clinical and data safety standards. The apps mainly help people lose weight, stop smoking, be more active and drink less alcohol.
A report in BioMed Central says a study team at Imperial College London reviewed 79 apps and found that this assurance isn’t complete. Some apps, 23 in the review, don’t comply with privacy standards and send data without encrypting it. Some of these apps have been taken off the Library’s list, which raises the questions of why and how they were on it in the first place. NHS England’s piloting a new, more rigorous accreditation process.
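A minimal example of the kind of check an accreditation process could automate, assuming a hypothetical submission endpoint: refuse to transmit health data to anything that isn’t HTTPS, so it’s never sent unencrypted.

```python
from urllib.parse import urlparse

# Hypothetical guard: only endpoints using TLS (the https scheme) are
# allowed to receive health data. The URLs below are illustrative.
def submit_allowed(endpoint: str) -> bool:
    return urlparse(endpoint).scheme == "https"

print(submit_allowed("https://api.example-health.org/v1/readings"))  # True
print(submit_allowed("http://api.example-health.org/v1/readings"))   # False
```

A real accreditation test would go further, for example verifying certificate validation and inspecting actual traffic, but even this simple scheme check would have caught apps that send data in the clear.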
For Africa’s increasing emphasis on mHealth, it shows that accreditation must be rigorous and ensure compliance with recognised standards. It may not be as easy as it seems. An audit of the accreditation’s essential.
- 1,327 views
- October 02, 2015
- Tom Jones
Can de-identification maintain privacy?
As databases about patients, analytics and Big Data for secondary uses expand in healthcare, protecting patients’ privacy is becoming increasingly important and challenging. Privacy Analytics addresses this in a recent white paper, De-identification 201, part of Privacy Analytics’ De-Id University.
Anonymising data is the goal of de-identification: ensuring that data used beyond its primary role can’t be matched to the people it describes. This protects their privacy. The main dilemma is the trade-off between maximising privacy and maximising data’s usefulness. It’s challenging because removing patients’ names and other direct identifiers, such as social security numbers, from a dataset isn’t sufficient to achieve de-identification. Indirect data, called quasi-identifiers, such as ages, birth dates and post codes, are left in place, and when they’re combined they can be used to identify individuals.
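The quasi-identifier risk can be illustrated with a toy k-anonymity check, a standard measure of the smallest group of records sharing the same quasi-identifier values; the records here are invented.

```python
from collections import Counter

# Even with names removed, the combination (age band, post code) can
# single people out. k-anonymity asks: what's the smallest group of
# records sharing the same quasi-identifier values? k = 1 means at
# least one person is uniquely identifiable.
records = [
    {"age_band": "30-39", "post_code": "2000"},
    {"age_band": "30-39", "post_code": "2000"},
    {"age_band": "60-69", "post_code": "7405"},  # unique combination
]

def k_anonymity(rows, quasi_identifiers):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

print(k_anonymity(records, ["age_band", "post_code"]))  # 1: not anonymous
```

De-identification techniques raise k by generalising quasi-identifiers, for example widening age bands or truncating post codes, which is exactly the privacy-versus-usefulness trade-off the white paper describes.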
Two types of de-identification standard are important to protect privacy: safe harbour and expert determination. Safe harbour is easy to use, but has drawbacks because some data can be lost. Expert determination relies on experts to assess re-identification risk, and retains more of the data’s scope and detail. For both standards, it’s important that users know how the data will be used for its secondary purpose.
De-identification’s a technique alongside masking. Their main difference is in scope. De-identification can anonymise quasi-identifiers. Masking anonymises direct identifiers, and relies to a large extent on techniques that remove data, so may reduce the data’s usefulness.
Privacy Analytics is clear that expert determination is a risk-based approach because the level of de-identification depends on a risk assessment of its use or disclosure. In addition to this advice, the white paper includes a valuable appendix of terminology. Privacy Analytics has numerous white papers that provide important insights for Africa’s eHealth regulators and data managers. It’s a valuable reference site.
- 918 views
- August 06, 2015
- Tom Jones
How much privacy are we entitled to?
Does your health status affect me? If the answer’s yes, then I have the right to protection. How should I be protected without infringing your rights to privacy and confidentiality? These questions may need new conversations, involving wider stakeholder groups, than most health systems have managed so far. They're likely to reveal circumstances in which rights to sharing information could outweigh rights to privacy.
A recent African example is the Ebola outbreak, which strained communities’ abilities to maintain patients’ privacy and confidentiality. There are less extreme, though equally devastating examples, in which my knowing your health status, whether you like it or not, might be reasonable to protect me, particularly if your job means my life is in your hands.
Public transport has had its share of tragedies. Many result in the loss of many lives. Examples are bus accidents in South Africa, train collisions in India and elsewhere, and the recent aircraft crash in the French Alps. It seems reasonable to suggest that if buses, trains and planes are regulated properly, then their drivers and pilots should be too. Regulating them would necessitate opening up some aspects of their health records to new types of scrutiny and information sharing.
It’s more than three weeks since the tragedy when flight 9525 went down in France and all 150 on board lost their lives. It’s been a harrowing time for their families. French prosecutors now know that co-pilot Andreas Lubitz had been under treatment for severe depression. He had been seeing at least six doctors, who had prescribed a wide range of medication. A bewildered flying public is beginning to ask questions about how, with today’s modern technological connectedness, it was possible that the doctors did not know he was doctor hopping, and that his employer did not know of the extent of the risk and disability caused by his mental illness. The answer may be relatively simple; that current privacy and confidentiality laws and good practice limit the amount of sharing that’s possible, between doctors and from doctors to employers, particularly for certain types of diagnoses, including psychological conditions.
The technical solution provides a pertinent example of eHealth’s potential and its challenges. While a well-connected health information infrastructure, supporting effective levels of interoperability, is technically achievable and would make meaningful sharing possible, there are tremendous human barriers in the way of it being used effectively. Confidentiality is but one highly emotive example. Dealing with these barriers needs robust stakeholder engagement to identify acceptable and appropriate sharing principles. It also needs a strong regulatory framework to apply them and to support employers and employees in managing their relationships effectively to protect the public.
We can hope that the scale of this unexpected shock will help us to re-imagine practical solutions to these challenges.
- 1,107 views
- April 16, 2015
- Sean Broomhead
Healthcare privacy not a big concern for US patients
Privacy and security around personal health data has been an area of concern for years. An article in Becker’s Health IT & CIO Review says a new poll from Truven Health Analytics and NPR Health shows that health information privacy isn’t a big concern after all.
A survey of 3,000 adults found that few people were concerned with their health data privacy. Only 11% of respondents expressed privacy concerns about health records held by their physicians. Some 14% had privacy concerns with hospitals and about 16% had concerns with health insurance companies.
Americans are more comfortable sharing their health information than their social media or credit card purchase information. About 78% said they wouldn’t be willing to share their credit card purchase history and social media activity with physicians, hospitals and insurers, regardless of the planned use and even if it could help improve their healthcare.
Research from the Ponemon Institute found a similar attitude. People are worried about security and privacy in general, but medical record privacy ranks near the bottom of their concerns.
An intriguing feature of these findings is that healthcare records contain some details of patients’ financial profiles, such as social security numbers, which are frequent targets for cyber-crime attacks in the USA. It’d be interesting to know whether US citizens’ concerns about the privacy and security of this data are greater than their concerns about their health data.
Could this relaxed attitude also be true for patients in Africa?
- 239 views
- December 12, 2014
- Lesley Dobson