As a medical doctor I extensively use digital voice recorders to document my work. My secretary does the transcription. As a cost-saving measure, the process is soon intended to be replaced by AI-powered transcription, trained on each doctor’s voice. As I understand it, the model created is not stored locally and I have no control over it whatsoever.
I see many dangers, as the model is trained on biometric data and could possibly be used to recreate my voice. Of course I understand that there are probably other recordings of me on the Internet, enough to recreate my voice, but that’s beside the point. Also, the question is about educating them, not a legal one.
How do I present my case? I’m not willing to use a non-local AI to transcribe my voice. I don’t want to be perceived as a paranoid nut case. Preferably I want my bosses and colleagues to understand the privacy concerns and dangers of using a “cloud solution”. Unfortunately they are totally ignorant of the field of technology, and the explanations/examples need to translate to the lay person.
Do your patients know that their information is being transcribed in the cloud, which means it could potentially be hacked, leaked, tracked, and sold? How does this foster a sense of distrust, and harm the patients’ progress?
Could you leverage this information, and the possibility of being sued if information is leaked, with the bureaucrats?
I would have my workplace sign a legal waiver stating that, from the moment I use the technology, none of the recordings or transcriptions of me can be used to incriminate me in case of alleged malpractice.
In fact, since both can be generated in a way that sounds very assertive while also introducing incredibly wild mistakes, in a potentially life-and-death situation, I would want them to legally recognise that this potentially nullifies my work, and to take the entire legal responsibility for it.
As you can see in the most recent example involving Air Canada, a policy was invented out of thin air, and that policy is now costing the company. In the case of a doctor, if the wrong sedative, the wrong medication, or the wrong diagnosis was communicated to the patient, all of that could have serious consequences.
All of it sounding like you (using your phrasings, etc.), being extremely assertive, and so on.
A human doing that job will know not to deviate from the recording. An AI? “Antihistaminic” and “anti-asthmatic” aren’t too far off, and that is just one example off the top of my head.
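Just to make that concrete, here is a rough sketch using nothing but Python’s standard library; the word pairs are my own hypothetical examples of near-misses, not output from any real transcription system:

```python
from difflib import SequenceMatcher

# Hypothetical near-miss pairs a speech-to-text model could plausibly confuse.
pairs = [
    ("antihistaminic", "anti asthmatic"),
    ("leeches", "lesions"),
    ("hydroxyzine", "hydralazine"),  # a classic look-alike/sound-alike drug pair
]

for a, b in pairs:
    # 0.0 = completely different, 1.0 = identical, at the character level
    ratio = SequenceMatcher(None, a, b).ratio()
    print(f"{a!r} vs {b!r}: similarity {ratio:.2f}")
```

Strings that close are exactly where a model just picks whichever option it scored slightly higher, with no idea whether it makes clinical sense.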
Will they allow you to use your own non-cloud solution? As long as you turn in text documents and they don’t have to pay a person to transcribe, they should be happy. There are a number of speech to text apps you can run locally on a laptop, phone, or tablet.
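For example, OpenAI’s Whisper can run entirely offline once the model file is downloaded. A minimal sketch (assuming the openai-whisper package, ffmpeg, and a hypothetical dictation file on disk; not your hospital’s system, just proof that local speech-to-text exists):

```python
# pip install openai-whisper   (ffmpeg must also be installed)
import whisper

# The model is downloaded once and then runs locally; nothing leaves the machine.
model = whisper.load_model("small")

# Transcribe a dictation file from disk (hypothetical filename).
result = model.transcribe("dictation_2024-03-01.m4a")
print(result["text"])
```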
But of course, it’s sometimes about control and exercising their corporate authority over you. Bosses get off on that shit.
Not sure which type of doctor you are, but there’s a general shortage of NPI people. I hope you can fight back with some leverage. Best of luck.
It will not be possible to use my own software. The computer environment is tightly controlled. If this is implemented my only input device to the medical records will be the AI transcriber (stupidity).
I’m a psychiatrist in the field of substance abuse and withdrawal. Sure, there’s a shortage of us too, but I want the hospital to understand the problem, not just me getting to keep an old-school secretary by threatening to go to another hospital.
I was afraid that might be the case. Was hoping they would let you upload the files as if you had typed them yourself.
Maybe find some studies / articles on transcription bots getting medical terminology and drug names wrong. I’m sure that happens. AI is getting scary-good, but it’s far from perfect, and this is potentially a low-possibility-but-dangerous-consequences kind of scenario. Unfortunately the marketers of their software probably have canned responses to these types of concerns. Management is going to hear what they want to hear.
Thanks for the advice, but I’m not against using AI models to transcribe me, just not a cloud model specifically trained on my voice without any control by me. A local model, or preferably a general local model, would be fine. What makes me sad is that the people behind this are totally ignorant of the problem.
I understand, and we’re basically on the same page. I’m not fully anti-AI, either. Like any tool, it can be used for good or evil. And you are right to have concerns about data stored in the cloud. The tech bros will mock you for it and then… oh look, another data breach has it been five minutes already. :)
Yes, I agree. Broadening the scope a little, frankly I’m just waiting for a big leak of medical records. The system we use is a bird’s nest of different software, countless APIs, and all sorts of database backends. Many systems stem from MS-DOS, just embedded in a slightly more modern integrated environment. There are just so many flaws that I’m amazed a leak hasn’t happened (or at least surfaced) yet.
Do we work for the same place? 😆
I take this as humour. I understand that my situation and IT suite aren’t more insecure than many others :)
“my only input device to the medical records will be the AI transcriber”
I understand that you keep steering away from legal arguments, but that can’t be legal either. How could a doctor not have direct, manual access to patient records?
Anyway, practical issues:
You need some way to manually interact with patient records in the inevitable event the AI transcription gets it wrong. It only takes one time messing up transcription on something critical and you have a fucking body on your hands. Is your hospital prepared to give patients the wrong dosages because background noise or someone else speaking makes the AI mishear? Who would be held responsible in the case of mistreatment due to mistranscription? Is your hospital willing to be one of the first to try to tackle that legal rat’s nest?
A secretary is able to do a sanity check that what they heard makes sense. AI transcription will have no such logic behind it. It will turn whatever it thinks it heard into text and chuck it wherever it logs to. It thinks you’ve called for leeches when you said something about lesions? Have fun.
Whenever there’s an issue with the transcription service you’d be screwed too. That could mean a network outage, a power outage, the microphone breaking, any part of this equipment breaking, and this whole system falls apart.
The problem with incorrect transcription exists with my secretary too. In the system I work in, the secretary writes out my recording and sends it to me, and I read it. I can edit the text at this point and then digitally sign it with a personal private key. This usually happens at least a day after the recording is made. All prescriptions or orders to my nurses are given in another system, besides the raw text in the medical records. I can’t easily explain the practical workings, but I really don’t see that the AI system will introduce more errors.
But I agree that in the event of a system failure, there will be a catastrophic situation.
I would suggest that the first action item would be to ask for (in writing) their 1) data protection and 2) privacy policies. I would then either pick them apart, or find someone who works in cybersecurity (or the right lawyer) to do that. I’ve done it a few times and talked my employer out of a few dodgy products, because the policies clearly try to absolve the vendor of any potential liability. Now, whether the policies truly limit liability would have to be tested in court.
You could also talk about how data protection, encryption, identity and access management, and governance are actually really expensive, but I’d first start poking holes in the actual policies to create doubt.
Your voice-print is worth protecting.
There are already retirement funds activating “my voice is my password” by default now. (You can, and absolutely should, opt out if yours does.) And you can’t change your voice-print if it gets leaked. (Maybe with a professional voice coach you could…)
Personally, I would change employers over this, if I had the option.
I think we’re heading towards having a group of citizens with compromised voice-prints leaked to the dark web, who have a harder time day to day through no fault of their own. Like the early SSN breach sufferers, history tells us that society says “it’s a shame”, and tries to protect the next generation properly, but doesn’t recompense those hurt by the early bullshit.
While job searching, I would also request an accommodation and not use the voice system. It’s much easier for the employer to retain a secretary for you than to deal with the legal hassles that will come up if they try to fire you for not using their legal-gray-area solution.
Even granted the accommodation, I would be looking for my next job though.
Most places use this sort of software (at least, larger companies). I have worked with doctors who refused to use it and instead developed templates for common items they copied + pasted into the MAR software / PACS, etc., and they just type what they need. That’s what they did before dictation software existed anyway. It’s not as efficient, but it’s basically the only way to avoid this.
Stop using the digital voice recorder and type everything yourself. This is the best way to protect your voice print in this situation. It doesn’t work well as a protest or to educate your colleagues, but I suppose that’s one thing you can use your voice for. Since AI transcription is a cost saving measure, there will be nothing you can do to stop its use. No decision maker will choose the more expensive option with a higher error rate on morals alone.
Unfortunately the interface of the medical records system will be changed when this is implemented. The keyboard input method will be entirely removed.
The personalized data model will be trained on your voice. That means that it’s going to be trained on a great deal of patient medical history data (including PII). That means it’s covered by HIPAA.
I strongly doubt the service in question meets even the most minimal of requirements.
I assume you’ll be using Dragon Medical One. Nuance is a well established organization, with users in a broad range of professions, and their medical product is extensively used by many specialists. The health system where I live has been in the process of phasing out transcriptionists in favor of it for a decade or so.
The only potential privacy concerns a hospital would care about would be if they are storing your transcripts on their servers, because that will contain sensitive information about patients. It will be impossible to get any administrator to care about your voice data.
This is unlikely to be a tide you can stem, but you could stop dictating and type it yourself.
I’m not sure what exact service will be used. I won’t be able to type as the IT environment is tightly controlled and they will even remove the keyboard as an input device for the medical records.
I see that lasting until the first time some record ends up reading “backspace backspace backspace! No you stupid delete! Delete. Dee feet delete”.
Simple jobs are going to continue to go away in favor of more efficient spending.
You’re not going to get around the removal of simple jobs from the market in favor of newer concepts and more complex operations.
All these people that said going to college to further your education was stupid and a waste of money are going to be the first to bitch and moan because the rest of us who spent the time and money to better ourselves would like to reciprocate that same logic into the world so you don’t have to worry about things like underpaid fast food workers spitting in your food, delivery drivers stealing your food, etc.
Some people who can only do “simple” tasks are the ones who stand the most to be hurt by the world moving forward and becoming more advanced and complex, but I’m not sure what we can do to help them outside of seriously considering UBI. The wealth we are generating and saving through automation deserves to be equally spread amongst the people it replaced. That’s fair.
I think it was pretty clear the issue was one of privacy requirements and not any qualms with losing jobs, which isn’t even happening here.
That’s correct! I’m not against using technology to cut costs or provide better healthcare. My question is entirely about the privacy implications.
They do pretty specifically mention the using their own voice thing, good point.
However, I’d like to remind everyone that recording you while in public is done, and done very frequently (look at all the whistleblower docs), so it’s really moot imo whether or not there exist recordings of your voice.
And everything else I said still stands. Idgaf about the doctor who still goes home with some of the highest salaries in the public. Personally, I think medical practitioners should be a part of working for the state or the govt, and you basically become a servant to the public. Imo doctors should be held to the same public scrutiny but that’s a diff topic.
Not in public. This is a conversation with the healthcare provider, not with your partner while you’re at the grocery store. You have a legally recognized right to privacy (at least in the US) when it comes to your health details.
Which is an unequivocally good thing.
You’re mixing up topics. The doctor doesn’t want a voice model made after their own likeness based off these private recordings, but I’m saying there’s already a plethora of ways to record you in public that have been around since at least 9/11 in the US.
It’s a moot thing to be trying to dodge/keep private from my perspective. If anyone can record you while you’re speaking in public, you’re not going to convince anyone that you shouldn’t be able to do it in private with consent forms, terms of service, etc.
I had another idea. You might be able to use something that distorts your voice so that it doesn’t sound anything like you, but the AI can still transcribe it to text. There are some cheap novelty devices on amazon that do this, and also some more expensive pro audio gear that does the same thing. Just a thought.
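If you wanted to experiment in software instead of buying hardware, a plain pitch shift already changes the voice-print quite a bit. A rough sketch with librosa (hypothetical filenames; whether the hospital’s transcription engine still copes with the shifted audio is something you’d have to test):

```python
# pip install librosa soundfile
import librosa
import soundfile as sf

# Load the original dictation from disk (hypothetical filename).
audio, sr = librosa.load("dictation.wav", sr=None)

# Shift the pitch down a few semitones so the recording no longer sounds like you,
# while hopefully staying clear enough for speech-to-text.
shifted = librosa.effects.pitch_shift(audio, sr=sr, n_steps=-4)

sf.write("dictation_disguised.wav", shifted, sr)
```

Pitch shifting alone is not a strong anonymiser against a determined attacker, but it would at least keep a naive voice model from being trained on your natural voice.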
Sure, but what about my peers? I want to get the point across and build an understanding of the privacy implications. I’m certain that this is just the first of many reforms made without a proper analysis of the privacy implications.
I agree that getting the point across and having them rethink this whole thing is a much better way of handling this than using a tech solution. I am just pessimistic you can change their minds and you might need a plan B.
So what’s your concern? I’m a bit confused.
- Using cloud to process patient data? Or,
- Collecting your voice to train a model?
Ironically, GPT can kinda get you started here…
To present your case effectively to your bosses and colleagues, focus on simplifying the technical aspects and emphasizing the potential risks associated with using a cloud-based AI transcription service:
- Privacy Concerns: Explain that using a cloud-based solution means entrusting sensitive biometric data (your voice) to a third-party provider. Emphasize that this data could potentially be accessed or misused without your consent.
- Security Risks: Highlight the risks of data breaches and unauthorized access to your voice recordings stored in the cloud. Mention recent high-profile cases of data breaches to illustrate the potential consequences.
- Voice Cloning: Explain the concept of voice cloning and how AI algorithms can be trained to mimic your voice using the data stored in the cloud. Use simple examples or analogies to illustrate how this could be used for malicious purposes, such as impersonation or fraud.
- Lack of Control: Stress that you have no control over how your voice data is used or stored once it’s uploaded to the cloud. Unlike a local solution where you have more oversight and control, a cloud-based service leaves you vulnerable to the policies and practices of the provider.
- Legal and Ethical Implications: While you acknowledge that there may be existing recordings of your voice online, emphasize that knowingly contributing to the creation of a database that could potentially be used for unethical or illegal purposes raises serious concerns about professional ethics and personal privacy.
- Alternative Solutions: Suggest alternative solutions that prioritize privacy and security, such as using local AI transcription software that does not upload data to the cloud or implementing stricter data protection policies within your organization.
By framing your concerns in terms of privacy, security, and ethical considerations, you can help your bosses and colleagues understand the potential risks associated with using a cloud-based AI transcription service without coming across as paranoid. Highlighting the importance of protecting sensitive data and maintaining control over personal information should resonate with individuals regardless of their level of technical expertise.
This is really weird. Is it common in other countries for doctors to not input the data in the system themselves?
I don’t know if it’s common practice in other countries. In Sweden, where I work, it is. I think the rationale is the following:
- It’s a lot faster to use a voice recorder.
- A doctor’s time is worth a lot more than a secretary’s (in the sense of pay and rarity)
- Using a voice recorder lets us review lab results, radiology, etc. at the same time as recording, without having to switch between tasks.
- Doctors won’t have to be good spellers or think about building well-thought-out sentences. We also don’t have to look up classification codes for procedures and diagnoses. All this is done by the secretary.
Of course we have to review the transcribed result. At my hospital, all doctors carry smart cards and use the private key stored on them to digitally sign every transcribed medical record entry.
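For anyone wondering what that signing step boils down to, here is a minimal sketch of sign-and-verify using the Python cryptography library. The real system keeps the private key on the smart card and has its own record format; the key generation and the sample text here are purely illustrative:

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In reality the private key lives on the doctor's smart card and never leaves it;
# generating one here only keeps the example self-contained.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The reviewed, possibly edited, transcription text (made-up example).
record_entry = "Reviewed lab results, adjusted dosage as dictated.".encode("utf-8")

# Signing binds the doctor's identity to exactly this text.
signature = private_key.sign(record_entry)

# Later, anyone with the public key can check the entry wasn't altered after signing;
# verify() raises InvalidSignature if even one character has changed.
public_key.verify(signature, record_entry)
print("Signature verified")
```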
Personally I’d be more worried about leaking patient information to an uncontrolled system than having a voice model made
That’s another issue and doesn’t lessen the importance of this one. Both are important but separate: one is about patient data, the other about my voice model. Also, in this case I have no control over the medical records, and they are already stored outside the hospital.
What, exactly, are your privacy concerns about this?
My biometric data, in this case my voice: training an AI, tailored to my voice, out of my control, hosted as a cloud solution.
Of course there is an aspect of patient confidentiality too, but this battle is already lost. The data in the medical records is already hosted outside of my hospital.
Sounds like a weak argument. They’re not going to be inclined to operate a local ML system just for one or two people.
I would see if you can get a quote for locally-hosted transcription software you can run on your own, like Dragon Medical. Maybe reach out to your IT department to see if they already have a working relationship with Nuance for that software. If they’re willing to get you started, you can probably just use that for dictation and nobody will notice or care.