By Jeremy Crooks

Data and the Digital Self

This article is a summary of the 116-page Australian Computer Society White Paper on Data and the Digital Self.


Extracts from the White Paper


Smart Cities

There is an ongoing focus on making our cities, towns and places ‘smart’, which is underpinned by the use of data. Our response to COVID-19 clearly demonstrated the value of access to data for rapidly responding to the pandemic. We needed to understand the impact of travel and other lockdown restrictions on critical services and critical infrastructure – from ensuring nursing homes remained safe to ensuring power stations remained in operation. This required a new capacity to understand a city in near real time, and to have realistic, data-driven models that allowed options to be explored.


[Big Data] requires us to make our services smarter, whereby we can understand in fine detail what is happening in a city or community, identify root causes of problems, and even predict when things will go wrong and plan adaptive scenarios to respond to changing needs. This relies heavily on access to data, and that access will be influenced by technology trends around the creation and use of this data. It also opens up questions of privacy, security and consent.


Privacy

Inescapably, however, any data collected about people directly – their actions, location, environment or any aspect of the context they operate in – has some aspect of what may be regarded as personal information, even if ‘de-identified’ to remove unique identifiers. If the datasets used for these purposes are linked and analysed to provide sophisticated, personalised services, a great deal of personal data (PD) or personal information (PI) may be contained in the joined data, possibly sufficient to re-identify the individuals represented therein.
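
To make the re-identification risk concrete, here is a minimal sketch of a linkage attack, using invented data and the pandas library; real attacks work the same way at far greater scale.

```python
# Sketch: joining two 'de-identified' datasets on shared quasi-identifiers
# (postcode, birth year, gender) can re-identify individuals. All data invented.
import pandas as pd

# A 'de-identified' health dataset: direct identifiers removed.
health = pd.DataFrame({
    "postcode":   ["2000", "3000", "2000"],
    "birth_year": [1980, 1975, 1980],
    "gender":     ["F", "M", "M"],
    "diagnosis":  ["asthma", "diabetes", "hypertension"],
})

# A public dataset (e.g. an electoral roll) that carries names.
public = pd.DataFrame({
    "name":       ["Alice Smith", "Bob Jones", "Carol White"],
    "postcode":   ["2000", "3000", "2600"],
    "birth_year": [1980, 1975, 1990],
    "gender":     ["F", "M", "F"],
})

# Linking on the quasi-identifiers re-attaches names to diagnoses.
linked = health.merge(public, on=["postcode", "birth_year", "gender"])
print(linked[["name", "diagnosis"]])
# Alice Smith and Bob Jones are re-identified despite 'de-identification'.
```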


With 5G came the ability to reliably monitor and operate remote devices, to stream multiple channels of high-definition video, or drip-feed a few bits at a time between millions of sensors measuring anything from soil moisture to air quality. The race towards developing 6G by 2030 has begun. It will underpin widely anticipated future services – from immersive entertainment to secure and reliable operation of autonomous devices, and from monitoring personal health devices to securely and reliably managing driverless cars, trains, ships and aircraft.


The third important [6G] element is widespread use of AI to make sense of the rising tide of data to generate insights, to spot anomalies and to locally optimise systems. AI can also be used to augment human systems through direct human engagement (personalised assistants) or as intelligent autonomous mechanical systems (classical robots).
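
As an illustration of the anomaly-spotting role described here, the following is a minimal sketch of a z-score detector over a stream of sensor readings; the sensor values and threshold are invented, and production systems would use far more sophisticated models.

```python
# Sketch: flag readings that deviate sharply from the rest of the stream.
import statistics

def find_anomalies(readings, threshold=2.0):
    """Return (index, value) pairs more than `threshold` standard
    deviations from the mean of the readings."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mean) > threshold * stdev]

# Hypothetical air-quality sensor stream; the spike at index 5 is flagged.
sensor = [41.2, 40.8, 41.5, 40.9, 41.1, 98.7, 41.3, 40.6]
print(find_anomalies(sensor))   # [(5, 98.7)]
```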


Prosumer: a consumer who becomes involved with designing or customising products for their own needs.


Multidirectional energy flow from prosumers creates significant new technical challenges, the need for data interchange between multiple stakeholders, and the need to assure the system will operate correctly in a wide range of new situations. This is true of any newly ‘smart’ system, from telecommunications to smart traffic. The active component in these smart systems has greater potential to impact the outcome and to become a point of vulnerability in the system.


Digital Twins:

A digital twin is a virtual representation of a physical object or process; these are used in fields as varied as manufacturing, health, entertainment, construction and geological modelling. The digital twin connects to the physical object or process through a constant stream of updates (possibly in real time), which maintains the consistency of the physical and virtual worlds.
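
A minimal sketch of this idea, with invented names and a toy pump asset: the twin object ingests a stream of telemetry updates and keeps a virtual state that analyses can run against instead of the physical asset.

```python
# Sketch: the core of a digital twin - a virtual object kept consistent
# with its physical counterpart by a stream of state updates.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Virtual representation of a hypothetical physical pump."""
    pump_id: str
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def apply_update(self, reading: dict) -> None:
        """Ingest one telemetry message and keep the virtual state in sync."""
        self.state.update(reading)
        self.history.append(dict(self.state))

    def predict_overheat(self) -> bool:
        """A toy analysis run on the twin instead of the physical asset."""
        return self.state.get("temperature_c", 0) > 80

twin = PumpTwin("pump-07")
for msg in [{"temperature_c": 62, "rpm": 1450}, {"temperature_c": 85}]:
    twin.apply_update(msg)
print(twin.predict_overheat())  # True - flagged on the twin, not the pump
```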


AI will bring greater personalisation of services, augment decision-making, and be used as a frontline defence in escalating cyber security challenges. AI will also help entertain and educate, help identify anomalies in the digital and physical world, optimise systems, and lead to ever greater use of data.


One of the major considerations for a future world is the extent to which AI and automation are used. Acceptance of automation is framed within a wide set of concerns, including the unintended consequences of automated decision-making and the need for human judgement in the decision-making process.


The ‘smart’ in services comes from the ability to access and analyse data to improve situational awareness, understand or even predict root-cause challenges, deliver high-value new services and explore different possible scenarios.


Legal Implications of Data

Online price discrimination is commonplace, but it can be concerning and disempowering for the consumer if it lacks visibility or a rationale, especially when higher prices are imposed on historically disadvantaged groups. A purpose of personalisation is to treat people differently on the basis of their characteristics, raising challenging questions about which characteristics can justifiably be used for different kinds of personalisation.
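
A hypothetical sketch of why this worries regulators: a pricing rule keyed on user attributes is trivial to implement and entirely invisible to the buyer. The attributes and multipliers below are invented for illustration.

```python
# Sketch: personalised pricing keyed on inferred user attributes (invented).
BASE_PRICE = 100.0

def personalised_price(user: dict) -> float:
    price = BASE_PRICE
    if user.get("device") == "premium_phone":
        price *= 1.10   # inferred willingness to pay
    if user.get("postcode_income_band") == "low":
        price *= 1.05   # the disadvantaged-group case the text flags
    return round(price, 2)

print(personalised_price({"device": "premium_phone"}))      # 110.0
print(personalised_price({"postcode_income_band": "low"}))  # 105.0
```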


Australia's Identity-matching Services Bill 2019 and the Australian Passports Amendment (Identity-matching Services) Bill would deploy facial recognition technology to match separate identity files, including across federal, state and territory jurisdictions and, to a more limited extent, with the private sector. There is the potential for significant amounts of data to be collected and retained, potentially including live-feed CCTV and images from social media. This level of data collection about individuals, and data sharing by the state, can turn sinister and rapidly slide into mass surveillance of a populace, the vast majority of whom are under no suspicion. This carries risks to autonomy and the risk of manipulating behaviour. People who know they are being watched may also feel compelled to change their behaviour.


The more data is linked, the more fine-grained analyses, decisions, and even interventions governments can make, in relation to particular identified individuals or, more likely, in relation to non-identified subgroups of people based on their shared characteristics. There are genuinely difficult questions about the extent to which we want governments making predictions about the life trajectories or likely activities of individuals, let alone intervening, by offering or refusing access to services based on those predictions.


[Data Surveillance] can also lead to perverse outcomes, such as where people change their behaviour (who they socialise with, what they say on social media) in order to optimise their treatment by government or private sector organisations.


As a result of COVID-19, both governments and private sector actors have rapidly accelerated data sharing and linkage; the application of data analytics to highly sensitive health data; and innovations in all kinds of surveillance to increase awareness of where people are, how they are moving around, and who they are coming into contact with – on an aggregate and individual basis. Millions of Australians showed they were at least potentially willing to reveal who they’ve spent time with by downloading COVIDSafe, and millions more have provided a detailed map of their locations via QR code check-ins.


Social media platforms use data about what you just said on social media to decide how you are feeling, and target ads for products people buy when they’re feeling that way, as well as stories that will keep you on the platform longer; or they will infer your political leanings from what charitable causes you support and what news sources you read.
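
A toy sketch of the inference chain described here, with invented wordlists and ad categories; real platforms use trained models rather than keyword matching, but the pipeline shape is the same: classify mood, then select content known to perform on that mood.

```python
# Sketch: infer a mood from a post, then pick an ad targeted at that mood.
MOOD_WORDS = {
    "sad":   {"lonely", "miss", "tired", "down"},
    "happy": {"great", "love", "excited", "won"},
}

ADS_BY_MOOD = {
    "sad":   "comfort-food delivery",
    "happy": "holiday packages",
}

def infer_mood(post: str) -> str:
    """Pick the mood whose wordlist overlaps most with the post."""
    words = set(post.lower().split())
    scores = {mood: len(words & vocab) for mood, vocab in MOOD_WORDS.items()}
    return max(scores, key=scores.get)

post = "feeling pretty down and lonely tonight"
mood = infer_mood(post)
print(mood, "->", ADS_BY_MOOD[mood])   # sad -> comfort-food delivery
```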


Discriminatory Use of Data:

What many fear about automated processing is not that personal information will leak out, but rather that they will be treated differently from others for arbitrary reasons. Even in Australia, it can be argued that differential treatment relying on data-driven inferences can be rationally justified. So what we are describing here is just one possible interpretation of the rule of law, one designed to ensure a measure of protection for individuals, perhaps at the expense of rational scientific management of populations.


The fear is that technologies that use large-scale data analysis to categorise and characterise individuals, and intervene to alter their behaviour, risk ‘subordinat[ing] considerations of human well-being and human self-determination’. Forecasting how people will act based on their similarities with others – and intervening to affect those choices before they have been made – necessarily involves treating the person not as an individual but as a kind of object.


Trust Considerations of Data

AI and algorithms have replicated or even exacerbated inequalities in the ways that different demographic groups are treated. What’s more, if left unchecked, the assumptions and values embedded in these technologies and the decisions they enable can become baked into the infrastructures that drive subsequent knowledge practices. Data does not speak for itself but, rather, is given a voice by the people and the algorithms that play increasingly critical roles in the transformation of data into insight.


Local and international studies report that societal leaders are not trusted to handle challenges. An Australian election study yielded similar results, with 25% of respondents in the 2019 survey stating that they believe people in government can be trusted, compared with 51% in the 1969 survey. Data policy concerns now range far beyond the scope of the rights or interests of citizens to go about their private lives, including in public and semi-public places, without unjustified or unexpected collection and uses of data.


We need new thinking on the purposes of privacy policies and collection notices. Many regulated entities treat privacy risk management as another exercise in form over substance, providing ‘transparency’ only through buried and opaque disclosures. [Protection] proposals include substituting what is variously called a common ID, stable ID or universal ID for tracking codes and device codes. Universal ID proposals claim to provide a means by which the identity of a user, internet device or browser can be protected against being reverse engineered to a form of identification of the user.
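
One common construction behind such stable-ID schemes (not necessarily what any particular proposal uses) is a keyed hash of a user identifier: the same user always maps to the same opaque ID, while the secret key prevents the ID from being reversed back to the identifier. A minimal sketch with invented values:

```python
# Sketch: derive a stable, opaque ID from an email with a keyed hash (HMAC).
# The secret key is what prevents reverse engineering; without it, an
# observer cannot map the ID back to the email. All values are invented.
import hashlib
import hmac

SECRET_KEY = b"operator-held-secret"   # hypothetical; held by the ID provider

def stable_id(email: str) -> str:
    """Derive the same opaque ID for the same user, every time."""
    digest = hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

print(stable_id("alice@example.com"))  # same output on every call
print(stable_id("Alice@Example.com"))  # normalised to lowercase, so identical
```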


Ethical Considerations of Data


Reforms of the Privacy Act will not address many concerns about harms to individuals, or to society, potentially arising from applications of new technologies and advanced data analytics. Artificial intelligence (AI), machine learning (ML) and other algorithmic inference engines, and collection of non-traditional data (for example, through IoT devices and other smart cities and smart infrastructure applications) also give rise to concerns that should be addressed by responsible innovators.


Adoption of AI and automated decision-making by organisations is often accompanied by significant changes in decision-making processes within organisations, creating risks of over-reliance and opacity as to why decisions are made.


Adoption of AI and automated decision-making can be accompanied by significant changes in the structure of technology supply chains, including increases in supply chain complexity and the reliance on third-party providers. Focus upon AI outputs risks creating a frame of review that underestimates or ignores how humans using AI may rely upon AI outputs to effect outcomes that are not fair, socially responsible, reasonable, ethical or legal. The use of AI and automated decision-making can be accompanied by an increased scale of impacts when compared to conventional ways of performing business tasks. When things go wrong, unintended consequences can be very significant and very rapid.
