Have you tried asking ChatGPT to rewrite your essay with the tone and points most likely to resonate with, and be best received by, the different groups it identified? For example, asking it: show me a version of this essay that would be most likely to resonate with doctors who provide this type of treatment. Or mainstream NPR liberals. Or a version of this essay with the tone and talking points most likely to resonate with, and be successfully received by, parents of gender non-conforming children. We are always asking each other, in your posts and in the comments, what messages we should use to reach different people. Maybe we should ask ChatGPT?
Relatedly, I was recently struggling with a dilemma. I had some pretty strong feelings about the problem, about where things were going wrong, and about what needed to be addressed, but I also didn't trust that I was looking at it from an unbiased perspective or putting my emotions aside. So I put the situation and my interpretation into AI and asked it to tell me what points and perspectives I might be missing or misinterpreting. I didn't ask it to tell me what to do, just to identify potential blind spots, logical fallacies, and mistakes in reasoning. It was actually very helpful, but at the same time I'm deeply conflicted about using AI that way.
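For anyone who'd rather script either experiment than paste into the chat window, here is a rough sketch assuming the OpenAI Python SDK; the model name, file path, and exact prompt wording are all illustrative, not anything either of us actually ran:

```python
# Rough sketch of both requests via the OpenAI Python SDK (openai>=1.0).
# The model name and prompts below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()                      # reads OPENAI_API_KEY from the environment
essay = open("essay.txt").read()       # placeholder path for the essay text

# 1. Audience-tailored rewrite (swap the audience string for other versions)
rewrite = client.chat.completions.create(
    model="gpt-4o",                    # assumed model
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this essay with the tone and talking points most likely "
            "to resonate with parents of gender non-conforming children:\n\n"
            f"{essay}"
        ),
    }],
)
print(rewrite.choices[0].message.content)

# 2. Blind-spot check: ask for critique, not a decision
critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Don't tell me what to do. Identify potential blind spots, "
            f"logical fallacies, and mistakes in reasoning here:\n\n{essay}"
        ),
    }],
)
print(critique.choices[0].message.content)
```

Changing the audience string in the first prompt is all it would take to generate each of the other versions.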
Why are you conflicted about using AI in that way? Unless you fear it will omit something in order to avoid displeasing you in some way (and that really leaves you in no worse a state of ignorance than the one you started in). If you understand it's not omniscient either, and you are savvy to its occasional hallucination issues, you're free to accept (after double-checking with another source) or reject anything it presents to you.
I was recently told I had done a horrible thing and lost all credibility because I stated I had used AI to quickly check on one specific thing someone had asked me about. I was deliberately open about having done it because I recently took an ethics course about the use of AI and how its use must always be disclosed. Yes, its use makes some people very uncomfortable and even deeply angry, and I understand why. And in the particular case I described above, I felt like I was giving up part of my thinking and my humanity to something that is not human, takes jobs away from people, is harming students, and is also an environmental disaster. I recognize its strengths and potential benefits, but I cannot ignore all its problems and the harm it also does.
Hmmm . . . yes, I too have noticed the emotional reaction by many to AI. I think it's possibly justified in some ways, but it also reminds me of the knee-jerk reaction people have to self-driving cars.
I think your list of the evils that people use to justify being so hostile towards any use of this latest "tool" invented by humans should not be accepted so quickly as indisputable . . . at least not before weighing these claims by the more realistic criterion of trade-offs, rather than comparing the state of the world against some tacit presupposition that Utopia is achievable (i.e. I reference Thomas Sowell's very mature observation that "there are no solutions, only trade-offs"):
1. AI is not human (maybe that's what's useful about it, considering how a little bit of adrenaline and bad info from gossip makes most humans go "TILT" and do things like join lynch mobs.)
2. AI takes jobs away from people (If you read/listen to economist Tyler Cowen, you can find out about evidence that this latest bit of Luddite theory ain't necessarily so.)
3. AI is harming students (Again, I hear echoes of people predicting that inexpensive calculators would cause the death of math education . . . which distracts from other, far more impactful phenomena dragging down the quality of education. Again, I recommend one of Sowell's many books laying out the data.)
4. AI is an environmental disaster. (It may be that humankind needs the power of AI to devise the technology required to address the environmental issues that concern us all. Again, whether a given technology is a net positive or a net negative depends on what you include in your calculation . . . and some have ideological reasons for omitting a great many relevant factors from the equation for the sake of simple moral tales.)
Anyway, I wouldn't give in to those who attempt to shame you into rejecting this new technology. As I believe Lisa has speculated, it may just be a means to ameliorate some of the psychological and societal damage we've suffered due to social media's tendency to turbo-charge BOTH the better AND worse angels of our nature.
Judging by the many NY Times readers I see in the comments sections of various articles on the gender subject, I am certain there would be LOUD CHEERING for Lisa’s essay from the majority of them.
Loved your heretic essay, as well as your AI exercise. What a great idea! As a fellow disrupter, I can relate to the loneliness you expressed. Most people are not curious enough to go the extra mile with asking questions, doing some digging, stepping away from herd mentality. It can be a lonely place to be, as speaking honestly can alienate others; it's hard to know whom to trust. I like how toward the end of your essay you share some introspective curiosity as you acknowledge how your energy may have contributed to the alienation. It's a skill I have been working on for myself, which means I really hold back these days unless I know I have "permission" to speak freely and passionately about this. But I am not a journalist - speaking and connecting through the word is your passion and gift, so I think it would be much harder for you to tone things down - and maybe you shouldn't, and the social consequences are a necessary and inevitable by-product of your amazing mission.
I loved reading your essay, and the other contributions to the issue too.
I’m so glad you got those “I still love you” responses!
About ChatGPT: knowing the premise of these “large language models” (that they’re text-prediction!), I could accept their ability, early on, to create the illusion of a sentient partner in a back-and-forth conversation.
But nowadays, when I see ChatGPT generate an analysis like this, with a multi-part structure of headings and subheadings (doesn’t that require a bird’s-eye view of the whole? I ask myself)…it’s so much harder for me to think this is still an illusion, to deny that there’s actual understanding going on. Does anyone else feel simultaneously awed and freaked out by this?
(I’m reminded of this segment of This American Life, from June 2023: “First Contact” https://www.thisamericanlife.org/803/greetings-people-of-earth/act-one-18 )
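For what it's worth, the "text-prediction" premise is easy to see in code. Here is a minimal sketch using the Hugging Face transformers library with GPT-2 (a small stand-in of my choosing, purely for illustration): the model only ever picks the next token given the text so far, so any headings and multi-part structure have to emerge from that same left-to-right loop.

```python
# Minimal autoregressive generation loop: one token at a time,
# conditioned only on the text so far. (GPT-2 via Hugging Face
# `transformers` is an assumed small stand-in for ChatGPT.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("An outline of the essay:\n\n1.", return_tensors="pt").input_ids
for _ in range(40):                       # generate 40 tokens, greedily
    with torch.no_grad():
        logits = model(input_ids=ids).logits
    next_id = logits[0, -1].argmax()      # most likely next token, nothing more
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tokenizer.decode(ids[0]))
```

There is no step where the model plans the whole document; whatever bird's-eye coherence appears is produced one token at a time.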
Understanding Isn’t the Issue—It’s the Axioms
Lisa, I appreciate your openness and your hope that tools like ChatGPT might help people see beyond their own ideological filter bubbles. But I think there's a misdiagnosis here: the assumption that if trans allies could, for example, just understand the objections many women have to male bodies in women-only spaces, they'd come to see those objections as valid, or at least worthy of inclusion in the conversation.
In reality:
Many women have already explained their perspective, at length, for years.
They’ve used data, personal stories, philosophy, and law.
Trans allies have heard them—and dismissed them as transphobes.
This isn’t a communication gap; it’s an ontological conflict. If one person believes sex is real and politically salient, and the other believes gender identity overrides sex, then no amount of empathetic framing or AI-powered dialectic will resolve the impasse. It just clarifies it.
Understanding is important—but it’s not the barrier here. The real challenge is that one side sees women's boundaries as essential, and the other sees them as unjust. Until that core conflict is acknowledged, not even the most articulate AI can bridge it.
AI is wonderful; I use it full time. But you are giving it far more credit than it deserves in the context of this discussion. AI is a correlation engine -- it looks at what others have said about the same/similar topics and distills it. Since everyone has an opinion on everything (more or less) and many write about it, it is about our only tool for looking at all of those, grouping them, and then characterizing them.
So when it reports what different people would think, it is because it has already read very similar material (yes, I know you think your stuff is unique, but it never is) and it can correlate the reactions of various folks to it. So there is no moral downside to using its output: if you could find 1,000,000 friends spread across all persuasions and individually ask them their opinions, you would likely (if you could put aside personal blind spots) reach the same conclusions.
None of the AI responses are unexpected -- if you think about the groups and the endless materials most of them have produced, these would be their expected reactions. The AI is just good at grouping and distilling them. If you wish to modify your approach to accommodate these perspectives, that would be great. Or the AI will be glad to rewrite your piece into six pieces, each of which appeals to one of the axes more than the rest.
But there is no magic here -- just a reflection of what other HUMANS with various thought pattern groupings have thought about similar topics. Great for informing what you wish to write, but nothing really unexpected.
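To make the "grouping and distilling" idea concrete, here is a toy sketch of the simplest version of a correlation engine; sentence-transformers and scikit-learn are my assumed stand-ins, since no specific tools are named above. Each opinion is embedded as a vector, and similar opinions are then clustered together.

```python
# Toy "grouping" of opinions: embed each one as a vector, then
# cluster the vectors so similar views land in the same group.
# (sentence-transformers and scikit-learn are assumptions here;
# the opinion strings are invented placeholders.)
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

opinions = [
    "This new policy is long overdue and will help a lot of people.",
    "The policy fixes a real problem; critics are exaggerating.",
    "This policy was rushed and ignores serious side effects.",
    "Nobody asked the people most affected before passing this policy.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedder
vectors = embedder.encode(opinions)                  # one vector per opinion
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, opinions)):
    print(label, text)
```

An LLM does something far richer than this, of course, but the underlying move is the same: correlation over what humans have already written.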
Again, I recommend listening to this interview with philosopher and innovator Simon Cullen. (See "Can This AI Tool Save Campus Dialogue?" on the Heterodox Academy YouTube channel https://www.youtube.com/watch?v=mM5hewQ3Keo ) Description: "In conversation with John Tomasi, Simon explores how open inquiry is both advanced and imperiled by disagreement, and describes his academic journey from Australia to Princeton and Carnegie Mellon. Central to the discussion is ‘Sway’ an AI-powered platform developed by Simon and his team to foster rigorous, evidence-based dialogue among students on controversial topics. Sway intelligently pairs students with opposing views and acts as a “guide on the side,” scaffolding reasoning, encouraging intellectual humility, and ensuring that exchanges remain constructive and charitable. Simon shares the empirical findings from thousands of Sway-mediated dialogues, where measurable increases in students’ openness, comfort, and analytical reasoning have been observed—even on divisive subjects like gender, immigration, and the Israel-Palestine conflict." Cullen and his colleagues have been seeing great success when using the tool to facilitate productive conversations between university students on the topic of trans: https://www.swaybeta.ai/ It's free. Check it out!
Excellent essay, Lisa.
Fascinating issue of Queer Majority!