
eHealth – the use of technologies in support of health – has long offered the tantalising opportunity of revolutionising how people manage their health. 

On an individual level, it enables greater insights into our daily practices through smart technologies; for medical and health practitioners, access to a wider body of information to inform diagnostic decisions; for organisations, the ability to create data processing and management efficiencies; and for researchers, the ability to process and aggregate data at a previously unimaginable scale to analyse and identify relationships and obtain new insights.

The pervasive nature of technology at the personal-level combined with the opportunities afforded by Big Data, Artificial Intelligence and High-Performance Computing provide a connected ecosystem of sensor-rich environments able to process data at population-level scale.

In many respects, the explosion of information and the use of technology to exploit data is not unique to eHealth. What is unique is the increasing access to and granularity of such data, and its highly personal nature. Care must be taken not only in securing the information – which history has shown to be easier said than done – but also in ensuring people are able to reliably and knowledgeably control their data through its complete lifecycle.

Regulation and legislation provide much-needed enforcement and compliance on an organisational level, but this results in siloed protections around organisations, not individuals. From an individual’s perspective, there is little understanding of the holistic use of data. How is someone to understand what their personal data is being used for, how, and by whom?

This issue is further compounded by the volume of services individuals interact with, use of third parties within the delivery of services, the complexity of the underlying technology and the passage of time. Arguably, technology has to have a far greater role in managing and securing data for individuals.

Whilst cyber security is still immature in many respects, at its core, several principles have stood the test of time. Principles such as ‘least privilege’, ‘default deny’, ‘separation of duties’ and ‘security/privacy by design’ all help in the creation of secure software and systems. However, there is significant evidence that systems are designed with little or no regard to these. The Internet of Things (IoT), which could have a hugely positive impact on eHealth, has been shown to be particularly insecure[1],[2] – even devices that purport to provide security[3],[4]. In some cases, they can be compromised by relatively trivial attacks.
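To make the first two of those principles concrete, the sketch below shows a ‘default deny’, ‘least privilege’ access check in Python. The roles and permissions are invented purely for illustration – they are not drawn from any real eHealth system – but the pattern is the point: nothing is permitted unless it has been explicitly granted, and each role is granted only what it needs.

```python
# Illustrative only: roles and permissions are hypothetical examples,
# not taken from any real system.
ROLE_PERMISSIONS = {
    "patient": {"read_own_record"},
    "clinician": {"read_own_record", "read_patient_record", "add_note"},
}

def is_allowed(role: str, action: str) -> bool:
    """Default deny: any action not explicitly granted to the role is refused.

    Unknown roles get an empty permission set, so they too are denied
    everything by default rather than falling through to an allow.
    """
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("clinician", "add_note"))           # True: explicitly granted
print(is_allowed("patient", "read_patient_record"))  # False: never granted
print(is_allowed("visitor", "read_own_record"))      # False: unknown role
```

The design choice worth noting is the shape of the data structure: because permissions are an allow-list, a forgotten entry fails safe (access denied) rather than failing open.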

This suggests the lessons of the 1980s and 90s, when organisations were retrofitting security into products to counter the new threats from the growing popularity of the internet, have not been sufficiently learnt. Today, they are still focussing on function and prioritising being first to market over security.

However, it is not always clear what the best strategy is for securing systems. The worldwide effort to design contact-tracing apps produced two diametrically opposed approaches – centralised and decentralised – with the latter offering greater privacy, and the former enabling a greater level of information and data to better understand the nature and propagation of the disease[4].

Fully understanding the requirements of such systems is one critical reason why security must be considered at inception. This ensures a solution is functionally capable while meeting all stakeholders’ security and privacy expectations. To do otherwise can have severe implications for adoption and acceptance. That said, even if technologies are designed with the traditional cyber security principles in mind – which would be a great first step – this would be unlikely to resolve the issue, particularly within eHealth. It is essential to fully consider the human aspect.

eHealth technologies are not targeted at tech-savvy professionals capable of understanding and interacting with them; they need to be developed with society as a whole in mind, with all its rich variety of capability, knowledge and willingness.

The technologies, and the security mechanisms designed within them, need to provide the flexibility and adaptability to allow individuals to make informed decisions on how to use them. The traditional approach of designing a security mechanism as a ‘one-size-fits-all’ solution simply will not hold if users are to widely adopt and accept the technology.

Take user authentication – generally a username and password – as an example. Countless studies have demonstrated the ineffectiveness of the approach, key amongst the reasons being a person’s limited ability to remember strings of alphanumeric characters. Whilst this affects us all, given the number of services we now use and must remember credentials for, it has a particularly significant impact on individuals living with dementia and other conditions that limit cognitive function.

There is also a growing proportion of people being digitally left behind due to the demands and complexities of the technologies[5].

Focus now needs to shift to the development of security technologies that seek to mitigate the barriers and challenges that can exist between people and cyber security. For example, frictionless authentication seeks to non-intrusively and continuously verify the authenticity of a user. By capturing biometric identifiers during normal interaction with the device – such as capturing the face for facial recognition whilst the user types a text message – the burden on the user of having to remember credentials, or even to interact with the authentication mechanism at all, is removed.

Furthermore, because this capture is transparent, the security mechanism need not verify only at the point of entry, as is typical, but can continuously verify the user – providing increased levels of security with reduced levels of user inconvenience[6].
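One common way to frame such continuous verification – a sketch under assumed parameters, not a description of any specific product – is as a confidence score that decays over time and is topped up whenever a transparently captured biometric sample matches the enrolled user. The thresholds and weights below are invented for illustration.

```python
import time

LOCK_THRESHOLD = 0.3    # hypothetical: below this, fall back to explicit login
DECAY_PER_SECOND = 0.01 # hypothetical: confidence erodes without fresh evidence

class ContinuousAuthenticator:
    """Illustrative continuous-authentication session, not a real product API."""

    def __init__(self) -> None:
        self.confidence = 1.0  # fully trusted immediately after enrolment/login
        self.last_update = time.monotonic()

    def _decay(self) -> None:
        # Confidence erodes with elapsed time, so an abandoned device
        # gradually stops trusting whoever picks it up.
        now = time.monotonic()
        self.confidence = max(
            0.0, self.confidence - DECAY_PER_SECOND * (now - self.last_update)
        )
        self.last_update = now

    def observe(self, match_score: float) -> None:
        """Feed in a biometric match score in [0, 1], captured transparently
        (e.g. a face sample taken while the user types a message)."""
        self._decay()
        # Blend new evidence with current confidence; weights are illustrative.
        self.confidence = 0.7 * self.confidence + 0.3 * match_score

    def is_authenticated(self) -> bool:
        self._decay()
        return self.confidence >= LOCK_THRESHOLD
```

In use, strong matches keep the session alive invisibly, while a run of poor matches (someone else holding the device) drives the confidence below the threshold and forces an explicit login – the user only ever sees the mechanism when something looks wrong.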

Technology complexity within eHealth is such that even the most competent individuals will struggle to understand what is happening with their data, let alone know how to control it, secure it and provide informed consent. To build trust – with the knock-on effect of improving adoption and resulting in better health outcomes – it is imperative that solutions are sought to ease, not increase, the burden upon the user, whilst also providing sufficient awareness of what is being collected, how it is being processed and what it enables.

Professor Nathan Clarke

Professor Clarke is a Professor of Cyber Security and Digital Forensics and Deputy Head of School of Engineering, Computing and Mathematics
and has over 20 years of research experience in the areas of information security, biometrics, forensics and intrusion detection. He has over 200 outputs consisting of journal papers, conference papers, books, edited books, chapters and patents. He is the Chair of the IFIP TC11.12 Working Group on the Human Aspects of Information Security & Assurance.

A chartered engineer, he is a fellow of the British Computer Society (BCS) and a senior member of the IEEE. He authored 'Transparent User Authentication' – the first of its kind – which examines the problem of user authentication from a new and groundbreaking viewpoint.



The Old Normal: Our Future Health 

The Centre for Health Technology brings together researchers with over 30 years of evidence-based research experience in health and technology. Together, they work to enable innovative healthcare solutions that reduce the pressure on services, support healthy ageing in our communities and stimulate an economy of wellbeing that benefits all. 

In this series, they share their views on the current state of health and care in the UK, and what its future could look like.



References

[1] Sarid, U., (2019). “IoT obscurity is IoT insecurity”. Diginomica. 

[5] Bailey, D., Perks, M., Winter, C. (2018). “Supporting the digitally left behind”. Ingenia Online.

[6] Clarke, N. (2011). Transparent User Authentication. Springer.