
Antony Antoniou Uncensored

Why we must object to compulsory digital ID and digital currency

An urgent, comprehensive case against surrendering privacy, autonomy and democratic safeguards for convenience

Digital identity systems and state-issued digital currencies are presented today as progress: modern, efficient ways to access services, cut fraud, speed payments and bring more people into the formal economy. But the headline convenience of a single digital identity tied to a state-issued digital currency carries far deeper consequences than most citizens realise. When identity and money are both digitised and interoperable, the effect is to concentrate enormous power in the hands of states and corporations — power to observe, to control, to exclude and to reconfigure everyday life.

This article explains, in plain UK English, why people must object to the introduction of universal or compulsory digital ID systems and central bank digital currencies (CBDCs) that are linked to identity. It sets out the concrete risks to liberty, privacy, economic freedom and democracy; it highlights real-world examples and authoritative analysis; and it concludes with practical demands and alternative safeguards citizens should insist upon if any rollout is contemplated.

Short summary (so you know the headline risks before you read on)

  1. Mass surveillance of our lives. Linking identity to transactions creates a permanent, searchable record of what we buy, where we go and who we associate with.
  2. Programmable money = conditional freedom. Digital currency can be programmed to permit or deny spending, creating new levers for social and political control.
  3. Exclusion and discrimination. People who cannot use digital systems, will not use them, or are shut out of them risk being locked out of basic services.
  4. Centralised failure and cyber-risk. Central databases are irresistible targets for hackers; outages could deny people access to their money.
  5. Commercial exploitation and data monetisation. Private firms will always seek to monetise the rich behavioural data a combined system generates.
  6. Erosion of democratic safeguards. Emergency measures and “temporary” policies can become permanent; due process can be side-stepped by technical controls.

If you object to giving institutions — public or private — the power to surveil, gatekeep and program the conditions of living, you must object to universal digital ID and identity-linked digital currency as presently being proposed.

What I mean by “digital ID” and “digital currency”

For clarity:

  • Digital ID means a government- or privately-run system that identifies people in digital environments: verified identities that can be used for logging into services, proving age or nationality, accessing benefits, boarding planes, voting online, or transacting. These systems often tie biometric data, government records and private data together.
  • Digital currency / CBDC means legal tender issued directly by the central bank in digital form — a state-backed electronic token or account that functions like cash but exists only on networks. When a CBDC is identity-linked, every unit can be traced back to an identified person or account.

Both are technically feasible — and both are being actively explored by governments worldwide. But feasibility does not imply harmlessness.

1. Privacy: the most obvious and enduring casualty

Cash is private. Cash transactions create no centralised ledger linking buyer to purchase. Even if cash is imperfect, its anonymity protects ordinary life from permanent logging.

Digital ID + digital currency, especially when linked, destroys that anonymity. Governments (and any institution with access) can build a comprehensive, searchable record of who bought what, where, when and with whom they interacted. That’s not hypothetical: the privacy implications of CBDCs and identity systems have been repeatedly flagged by data-protection authorities and independent experts. Policy makers themselves recognise privacy is central to the design task because the digital nature of these systems makes data aggregation trivial. (European Data Protection Supervisor)

Why this matters in practice:

  • A single dataset can reveal sensitive patterns — health-related purchases, attendance at political meetings, donations to charities, religious observance, relationships and travel.
  • People change behaviour when they know they are watched. A permanent record chills freedom of association, political donations, whistleblowing and intimate choices.
  • “Anonymised” data is often deanonymised — identities can be reconstructed by correlating datasets. Centralised data is rarely as anonymous as promised.

If privacy is treated as an afterthought, these systems become surveillance machines in short order; that is why privacy must not be treated as optional.
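The deanonymisation point above is worth making concrete. The sketch below, using entirely invented data and field names, shows how a so-called linkage attack works: "anonymised" transaction records are matched against a second, openly identified dataset on shared quasi-identifiers. This is a minimal illustration of the technique, not a depiction of any real system.

```python
# Illustrative linkage attack: re-identifying "anonymised" records by
# correlating quasi-identifiers across two datasets. All data is invented.

anonymised_transactions = [
    {"postcode": "SW1A 1AA", "birth_year": 1984, "purchase": "health test kit"},
    {"postcode": "M1 2AB",   "birth_year": 1990, "purchase": "political donation"},
]

# A second, openly identified dataset (e.g. an electoral roll or loyalty scheme)
public_register = [
    {"name": "A. Example", "postcode": "SW1A 1AA", "birth_year": 1984},
    {"name": "B. Sample",  "postcode": "M1 2AB",   "birth_year": 1990},
]

def reidentify(transactions, register):
    """Match records on shared quasi-identifiers (postcode + birth year)."""
    matches = []
    for t in transactions:
        for person in register:
            if (t["postcode"] == person["postcode"]
                    and t["birth_year"] == person["birth_year"]):
                matches.append((person["name"], t["purchase"]))
    return matches

print(reidentify(anonymised_transactions, public_register))
# When a combination of quasi-identifiers is unique, each "anonymous"
# record is re-attached to a named individual.
```

Research on real datasets has repeatedly found that a handful of attributes — postcode, date of birth, gender — is enough to single out most individuals, which is why "we only publish anonymised data" is such a weak guarantee.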

2. Programmability: when money comes with strings attached

One of the touted advantages of digital money is programmability. Programmable features let authorities or issuers deliver targeted welfare, automate subsidies, or stamp out certain crimes. But that same technical capacity can be used to restrict spending — to say what you may or may not buy, when you can spend, or even to claw back funds if certain conditions are unmet.

Legal scholars and policy analysts have already mapped the legal and social consequences of programmable money and warned that it opens a path to conditional citizenship: benefits that can be spent only on certain items, allowances that expire, or price signals that penalise behaviour. Programmable money is not a neutral technical upgrade; it is a lever for social engineering. (Penn Carey Law Scholarship Repository)

Practical dangers:

  • Behavioural control: Governments could restrict spending based on behaviour (e.g. denying access to certain goods after a court order, or tying welfare to compliance with administrative requirements).
  • Political weaponisation: In politically fraught contexts, access to funds could be used to punish dissent — suspending or limiting an activist’s digital wallet for engaging in protest.
  • Permanent rationing: During crises, authorities might impose limits on types of purchases, shepherding citizens into certain behaviours under the guise of “fair distribution”.

The risk is not that these possibilities exist only in dystopian fiction; they are feasible design options. Without robust legal guards, programmable money is a policy tool with immense potential for abuse.
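To see how little engineering "conditional freedom" requires, consider the hypothetical sketch below. It does not reflect any real CBDC design; the class, fields and rules are invented purely to show that expiry dates, category blocks and administrative freezes are a few lines of code each once money is programmable.

```python
from datetime import date

# Hypothetical sketch of programmable-money rules. Nothing here describes a
# real CBDC; it only demonstrates that conditional spending is trivial to code.

class ProgrammableWallet:
    def __init__(self, balance, expiry=None, blocked_categories=None, frozen=False):
        self.balance = balance
        self.expiry = expiry                          # funds unusable after this date
        self.blocked_categories = blocked_categories or set()
        self.frozen = frozen                          # administrative freeze flag

    def authorise(self, amount, category, today):
        """Return (approved, reason) for a proposed payment."""
        if self.frozen:
            return False, "wallet frozen by issuer"
        if self.expiry and today > self.expiry:
            return False, "funds expired"
        if category in self.blocked_categories:
            return False, f"category '{category}' not permitted"
        if amount > self.balance:
            return False, "insufficient funds"
        self.balance -= amount
        return True, "approved"

wallet = ProgrammableWallet(
    balance=100.0,
    expiry=date(2025, 12, 31),
    blocked_categories={"alcohol", "travel"},
)
print(wallet.authorise(20.0, "groceries", date(2025, 6, 1)))  # approved
print(wallet.authorise(10.0, "travel", date(2025, 6, 1)))     # refused by policy
```

The point of the sketch is that every restriction is just another `if` statement evaluated at the moment of payment — which is precisely why the legal limits on who may write those `if` statements, and under what oversight, matter far more than the technology itself.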

3. Exclusion: the digital poor and the unwilling left behind

Digital identity programmes are often sold as tools of inclusion: they let people open bank accounts, claim benefits and prove identity remotely. But in practice, digital-first systems frequently exclude the most vulnerable.

Reasons for exclusion include lack of technology (no smartphone or stable internet), distrust of government systems, lack of identity documents to begin with, disabilities that impede use, literacy barriers, or simply a personal or political objection to being enrolled in centralised registries. When services — including essential goods, healthcare or the ability to receive wages — move behind a digital ID and digital currency, those without access are cut off. International analyses warn that insufficient attention to inclusion risks turning an enabling technology into an exclusionary gatekeeper. (Open Government Partnership)

Consequences:

  • People reliant on informal economies (cash, bartering) may be blocked from participating in the formal economy if cash is deprecated.
  • Elderly, rural and marginalised communities are disproportionately affected.
  • Opt-out options that exist on paper may be functionally meaningless if the ecosystem — shops, employers and public services — moves to digital-only operations.

In short: inclusion rhetoric must not mask real-world exclusion, and systems must not be permitted to functionally coerce citizens into participation.

4. Concentration of power and mission creep

A national identity database linked to payments confers an extraordinary amount of information and control on whoever operates it. That concentration of power has two dangers:

  1. Mission creep. Programmes introduced for convenience or fraud prevention are often expanded to new purposes. What begins as an optional digital driver’s licence or a pilot wallet programme can become a de facto mandatory system for accessing benefits, travel or jobs. Recent domestic debates over government digital-wallet initiatives show exactly this fear: critics worry that “voluntary” apps become effectively mandatory because services make them the easiest path to access. (The Guardian)
  2. State–corporate fusion. When governments partner with big tech firms to build and run identity and payments infrastructure, the system empowers not only the state, but private corporations with intimate knowledge of citizen behaviour. Commercial incentives (targeted advertising, credit-scoring, product placement) are strong; these firms will naturally seek to monetise any data streams they can access.

History shows that once centralised systems exist, they are rarely limited to their initial scope. Democratic safeguards — sunset clauses, strict legal limits on secondary uses, and meaningful oversight — are hard to impose and harder still to enforce across the lifetime of technology infrastructure.

5. Cybersecurity, resilience and the fatal attractiveness of the centralised target

Centralised ID/payments systems are high-value targets. A central database linking identity to transactions is a single point that, if compromised, can expose masses of sensitive information. The operational risks are large: outages, successful cyberattacks, insider abuse, software bugs or supply-chain vulnerabilities could all disrupt basic economic life.

Analysts warn that CBDCs and digital identity systems introduce new operational vulnerabilities for the financial system and critical services. The more essential the digital system, the more costly an outage or breach becomes for ordinary people who depend on it for salaries, pensions and everyday payments. (Atlantic Council)

Real risks include:

  • Mass theft of personally identifiable information and transaction histories.
  • Denial of service events that make payment infrastructure unusable.
  • Bank runs or liquidity shifts if citizens move funds en masse into state-issued tokens during periods of fear.
  • Dependency without redundancy, if cash and alternative mechanisms are removed.

Redundancy (keeping cash and non-digital options available) is not a minor technical detail — it is fundamental to resilience. Removing cash as a fallback increases systemic fragility.

6. Commercial exploitation and the monetisation of life

Digital identity and payments data is perhaps the most valuable behavioural dataset imaginable. If private firms gain access — directly or through opaque public–private partnerships — that dataset will be monetised for advertising, dynamic pricing, risk-scoring and countless other profit-making purposes.

Civil-society groups and privacy experts warn that identity-linked financial data can be abused to produce discriminatory credit scores, insurance premiums, personalised price gouging, or employment decisions based on spending patterns. In jurisdictions with weak regulation of data brokers, personal data markets will rapidly erode privacy for profit. The Electronic Frontier Foundation and others caution strongly that digital identity systems must not be treated as data-harvesting platforms. (Electronic Frontier Foundation)

Consequences for citizens:

  • Targeted discrimination in credit, insurance and employment.
  • Behavioural shaping through personalised offers and nudges.
  • Loss of bargaining power as every purchase becomes another datapoint used to predict and manipulate choices.

Citizens should be sceptical of any proposal that implicitly funnels personal data into profit-making ecosystems without absolute, enforceable restrictions.

7. Threats to democracy and civil liberties

The private, frictionless flow of money underpins many democratic activities: funding political campaigns, supporting independent media, donating to causes and paying for organising costs. When that flow is recorded in fine detail and can be blocked or reversed by authorities or platform owners, it becomes possible to strangle dissident voices without due process.

Human-rights groups have repeatedly flagged that CBDCs could amplify financial surveillance and civil-asset risks. There is a real danger that account suspensions or automated flags could be used to silence journalists, activists or minority communities under pretextual security claims. The danger is not only authoritarian states; it exists in democracies where administrative measures are used for political ends. (CBDC Tracker)

Concrete scenarios to worry about:

  • Pre-emptive freezes of activists’ funds during protests.
  • Choking funding channels for independent media and civil-society organisations.
  • Automated deplatforming, where algorithmic rules cut off service without human oversight or timely appeal.

Democracy requires breathing room for dissent and plurality: financial controls that can be applied in real time and at scale threaten that breathing room.

8. Legal and due-process weaknesses

Technical controls are easily misapplied without strong legal protections. If authorities can freeze or restrict access to digital money without judicial oversight, the balance between state power and individual rights shifts decisively in the direction of the state.

Several authorities and academic groups argue the legal architecture for CBDCs must protect privacy, ensure human oversight of restrictive measures, and provide redress mechanisms. In many jurisdictions, these protections do not yet exist or are weak. Without them, the introduction of identity-linked digital currency could outrun the legal safeguards citizens rely upon. (BfDI)

Requests citizens should make now:

  • Clear statutory limits on freezing or restricting funds.
  • Independent oversight and appeal rights for anyone affected by automated decisions.
  • Transparency reports and audit rights for how data and controls are used.

Technical systems are not substitutes for constitutional and statutory guarantees.

9. The slippery slope: normalising surveillance through convenience

Technological rollouts often begin with voluntary pilots and promises. Convenience — a single app to replace many cards — is seductive. People gradually come to accept and depend on new tools; the social cost of rejecting them rises until non-participation becomes virtually impossible.

This “normalisation” dynamic is dangerous because it changes the social baseline for acceptable governance. A system introduced for welfare distribution this year may be expanded next year for travel or voting, and continued expansion is defended as “efficiency” or “public safety”. Critics have repeatedly warned that digital-ID schemes, presented as optional, can become de facto mandatory when ecosystem actors make them the default interaction channel. The Guardian’s recent coverage of concerns about a government digital wallet highlights exactly this pathway from voluntary convenience to quasi-mandatory practice. (The Guardian)

Citizens should therefore treat initial “pilots” and voluntary schemes with healthy scepticism, demanding legal guarantees before any expansion.

10. International implications: surveillance beyond borders

Identity-linked digital currency would allow, in principle, cross-border tracing of transactions and identities. That opens the door to transnational data sharing and cooperative surveillance arrangements that are not controlled by any single democratic electorate. The geopolitics of digital currencies — who hosts the infrastructure, which firms supply the technology, and which jurisdictions have legal access to datasets — becomes a strategic national-security question.

Even in democracies, international sharing agreements could expose citizen data to foreign governments or corporations. The stakes are not merely domestic: they are global.

11. Examples and precedents to watch

  • Government digital-wallet initiatives: Debates in several countries show how voluntary digital ID schemes can rapidly become controversial when civil-liberties groups fear mission creep. Recent reporting on national digital wallet pilots highlights the danger that “optional” becomes default. (The Guardian)
  • Academic and policy warnings on CBDCs: International bodies, privacy authorities and think tanks have flagged privacy, resilience and democratic risks in numerous studies. These are not fringe concerns; they are mainstream expert warnings that merit public attention. (European Data Protection Supervisor)
  • Civil-society analyses of digital ID: NGOs working with marginalised communities report that badly-designed ID systems create exclusion and increase vulnerability to abuse. (immigrantdefenseproject.org)

These examples illustrate that the theoretical risks outlined above already have concrete manifestations and counter-examples in policy debates.

12. Counterarguments and why they fall short

Supporters of digital ID and CBDCs often make the following claims. Below are those claims — and why they are insufficient reasons to surrender liberty without robust safeguards.

Claim: Digital IDs prevent fraud and make services more efficient.
Response: Fraud prevention is worth pursuing, but efficiency gains do not automatically justify building systems that centralise control. Fraud can be reduced through narrowly tailored, auditable measures that do not create universal surveillance. Efficiency is not a trump card.

Claim: CBDCs will help the unbanked and modernise payments.
Response: Inclusion must be judged by outcomes, not promises. Many people rely on cash by choice or necessity. If CBDC deployment reduces cash availability, inclusion claims will ring hollow. Moreover, infrastructural dependence creates power asymmetries that can be exploited.

Claim: Privacy features can be built into designs.
Response: While privacy-by-design is possible in theory, history shows design choices are often dictated by political priorities. Privacy guarantees must be enshrined in law, not merely left to technical standards that can be overruled by future policy changes. Independent audits and enforceable constraints are indispensable.

Claim: Centralised systems are easier to regulate and therefore safer.
Response: Centralisation concentrates risk and power. Regulation must therefore be robust and enforceable beforehand; relying on future regulation after deployment is a recipe for mission creep and abuse.

These counterarguments do not invalidate the benefits that digital technology can bring, but they demonstrate that benefits do not negate the very real, structural risks.

13. What must citizens demand — minimum safeguards before anything is adopted

If a government insists on exploring digital ID or CBDC options, citizens and civil-society groups should insist on the following non-negotiable safeguards:

  1. Cash must remain legal tender and readily available. No technology that removes the practical possibility of anonymous cash transactions should be allowed. Redundancy is essential for resilience and freedom.
  2. Strong, constitutional-level privacy protections. Privacy must be protected by law, not just policy. This includes limits on data collection, retention periods, prohibited uses, and strict rules on sharing with other agencies or foreign governments.
  3. No identity–payments linkage by default. Digital identity systems should not be automatically linked to payments. Access to financial privacy must be preserved.
  4. Programmability limits. Programmable features must be narrowly defined, only used with transparent justification, and never for political control or punitive sanctions without due process.
  5. Independent oversight and judicial redress. Any freezes, restrictions or automated decisions affecting money or identity must be subject to rapid independent review and accessible appeal.
  6. Opt-out and non-discrimination guarantees. Citizens who choose not to use a digital ID or CBDC must not face discrimination or practical exclusion from essential services.
  7. Open, auditable systems and vendor transparency. Any private-sector involvement must be transparent, auditable and bounded by public-interest contracts. No secret algorithms deciding access to fundamental services.
  8. Sunset clauses and pilot transparency. Pilots must be time-limited, independently evaluated and reversible. Expansion must require new democratic approval.
  9. Data minimisation and local control. Only absolutely necessary data should be collected; retention periods must be short and data subject to deletion on request where appropriate.
  10. International safeguards. Cross-border data flows must be stringently regulated with democratic oversight.

Without these safeguards, the introduction of either universal digital ID or identity-linked CBDC should be resisted.
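Safeguard 9 (data minimisation and short retention) is also straightforward to operationalise in code, which removes any excuse for indefinite hoarding. The sketch below is a hypothetical illustration — the field names and 30-day window are invented, not drawn from any actual statute or system — of an automatic retention rule that deletes records once the permitted period expires.

```python
from datetime import date, timedelta

# Hypothetical retention rule: records older than the permitted retention
# window are purged automatically. The 30-day period and field names are
# invented for illustration only.

RETENTION = timedelta(days=30)

def purge_expired(records, today):
    """Keep only records still inside the retention window."""
    return [r for r in records if today - r["created"] <= RETENTION]

log = [
    {"created": date(2025, 1, 1), "event": "login"},
    {"created": date(2025, 3, 1), "event": "payment"},
]
print(purge_expired(log, date(2025, 3, 10)))
# Only the 1 March record survives a 30-day retention rule applied on 10 March.
```

The substantive demand is that a rule like this be imposed by statute and verified by independent audit, rather than left as an internal policy that future administrations can quietly relax.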

14. Practical steps citizens and civil society can take now

  • Learn and educate: Read plain-language briefings and explain to family, community groups and local councils what digital ID and CBDC proposals entail. Public awareness is essential.
  • Demand impact assessments: Governments should be required to publish privacy, human-rights and economic impact assessments before any pilot proceeds.
  • Lobby for legal guarantees: Pressure legislators to write privacy and due-process protections into statute before technical systems are built.
  • Support independent oversight bodies: Push for empowered data-protection authorities and independent audit mechanisms with real enforcement teeth.
  • Defend cash access: Train local businesses and co-operatives to accept cash and educate communities about its importance.
  • Join or support civil-society groups: Many NGOs are tracking these policies. Collective action is more effective than isolated complaint.

15. Final words: convenience must not become coercion

Technological change is inevitable; digitisation will continue to reshape many parts of life for the better. But technological possibility is not a democratic mandate. Changes that alter the balance between citizen and state — especially those that make privacy optional and reduce control to a technical setting — require much higher standards of democratic consent, legal protection and technical design.

A society that values liberty should be cautious about systems that monitor, programme, and gatekeep essential actions such as buying food, accessing healthcare, or supporting a charity. If your default reaction is unease at the idea of a single, searchable ledger of your life, you are not alone — and there are strong, expert-backed reasons for that unease. Authoritative analyses and privacy authorities have highlighted these very dangers and called for strict safeguards before any introduction is considered. (European Data Protection Supervisor)

If you care about freedom, privacy and the ability to live a life not continuously recorded and evaluated by algorithms or officials, you should examine proposals closely and demand hard legal promises — not soft assurances. Where governments propose digital identity or digital currency schemes, they must be met with rigorous scrutiny, democratic debate and legally enforceable protections. Anything less risks trading the quiet liberties of daily life for a brittle, centrally controlled convenience that we may one day regret.

