Sunday, December 24, 2023

Of friction and workflows - designing digital transitions that preserve the mission

We would like to call your attention to this good NPR episode about a hospital that tried to replace pagers with encrypted text messaging and failed.

So what happened? You might already be imagining the classical triad: slow adoption, cultural resistance, skyrocketing costs.

But no: this time NPR does the right thing and investigates for us the consequences of the early "success" of this digital innovation: information flows and workflows were devastated by the sudden removal of friction in communication, with the consequence that the very actions the digital tool was meant to improve lost meaning within the organisation.

In essence: the encrypted texts made it so frictionless to request a consultation that people started bombarding the on-call residents, who essentially stopped responding to the texts.

This kind of effect is something anyone trying to introduce technology into healthcare should reflect upon (may we remind you of the value chain maturity mapping toolkit?).

Success or failure is determined not by sheer speed anywhere in your organisation, but by harmonising workflows, team designs and organisational boundaries, so that information and speed serve your mission rather than becoming obstacles in their own right. If you believe every barrier is a problem to attack, think again about the dams in the Netherlands.

Saturday, December 9, 2023

MEPs reached a political deal with the Council on a bill to regulate AI in Europe - the first of its kind in the world

On Friday the 8th of December 2023, Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.


Banned applications
Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:
  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent people’s free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Law enforcement exemptions
Negotiators agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.
“Real-time” RBI would comply with strict conditions and its use would be limited in time and location, for the purposes of:
  • targeted searches of victims (abduction, trafficking, sexual exploitation),
  • prevention of a specific and present terrorist threat, or
  • the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).

Obligations for high-risk systems
For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.

Guardrails for general artificial intelligence systems
To account for the wide range of tasks AI systems can accomplish and the quick expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
For high-impact GPAI models with systemic risk, Parliament negotiators managed to secure more stringent obligations. If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

Measures to support innovation and SMEs
MEPs wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain. To this end, the agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities to develop and train innovative AI before placement on the market.

Sanctions and entry into force
Non-compliance with the rules can lead to fines ranging from €35 million or 7% of global turnover down to €7.5 million or 1.5% of turnover, depending on the infringement and the size of the company.
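
For a sense of how these ceilings scale, here is a minimal illustrative sketch (not legal advice). It assumes, which the summary above does not spell out, that the applicable ceiling for a given tier is whichever of the fixed amount or the turnover percentage is higher; the company turnover figure is purely hypothetical.

# Illustrative sketch of the fine ceilings quoted above; assumptions noted in the comments.
def fine_ceiling_eur(global_turnover_eur: float,
                     fixed_cap_eur: float,
                     turnover_share: float) -> float:
    """Upper bound of the fine for one infringement tier.

    Assumption: the ceiling is the higher of the fixed sum and the
    share of global turnover (not stated in the summary above).
    """
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Example: a hypothetical company with EUR 2 billion in global turnover.
turnover = 2_000_000_000
print(fine_ceiling_eur(turnover, 35_000_000, 0.07))   # 140000000.0 (top tier: 7% exceeds EUR 35M)
print(fine_ceiling_eur(turnover, 7_500_000, 0.015))   # 30000000.0 (lowest tier: 1.5% exceeds EUR 7.5M)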

The agreed text will now have to be formally adopted by both Parliament and Council to become EU law. Parliament’s Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting.

2nd International One Health Conference's position paper

The following technical document summarizes highlights of the 2nd International One Health Conference (https://onehealthconference.it/), held in Barcelona, Spain, in 2023. The document serves as a roadmap for future health leaders, detailing concrete scientific evidence, reflections and action points that were discussed and that have the potential to be pivotal if operationalized in sustainable, integrated policies.

Read the document in full at this link


Piano Giovani per l’Europa & POSITION PAPER - SOSTENIBILITÀ E MARE (Sustainability and the Sea)

Made up of over 94 Italian youth associations and groups, the Rete Giovani is committed to intergenerational justice. During the...