Using generative AI to create a deepfake of yourself (known as a truefake).
Some say that the advent of AI-based deepfakes is going to ruin our society.
Believe it or not, an additional viewpoint is that deepfakes can be harnessed in good ways, such as an approach I have named truefakes. A truefake is when someone uses generative AI technology to create a faked digital twin of themselves, presumably for bona fide and upfront reasons. The AI is supposed to pretend to be them. The person wants this to happen. They have opted to do so of their own volition and oftentimes for noteworthy purposes.
In today’s column, I will discuss this emerging other side of the deepfake conundrum and also briefly show you how to create a conversational deepfake (truefake) of yourself. The showcase will make use of ChatGPT, the widely popular generative AI app. To keep things simple and due to space limitations herein, I will just concentrate on a text-to-text version for now. In a subsequent column, I will explain how to turn the text-only instance into a variation that has audio to speak like you and a video version that looks like you.
Let’s begin by unpacking the matter of deepfakes and what I conversely call truefakes. Once I’ve covered those aspects, we can jump into the nuances of prompt engineering to lay a foundation for how to craft your own truefake. Finally, an evocative example will be shown and examined.
Understanding Those Deepfakes Versus The Arising Truefakes
There is a fast-moving trend toward people creating truefakes. For example, a number of celebrities have had their likeness and styles become available via the use of generative AI. Fans of the celebrity can access the app, usually for a fee, and pretend that they are interacting with their favorite movie star or influencer. The app interacts conversationally with the fan. As far as the person can discern, it is as though they are directly text-interacting or sometimes video-interacting with their beloved icon.
Yes, there is money to be made in these truefakes. The odds are that anyone of any notability will eventually decide that having a digital twin of themselves is a nice means of picking up some extra dough. The typical generative AI app as a truefake is usually coded and produced by a hired technical team. With generative AI rapidly advancing, you can bet your bottom dollar that the AI will be good enough to allow the person to do so themselves, though probably most of the big-time stars are still going to have their talent agents or other doers perform the heavy lifting for them.
You might have already heard about or even used the impressive personas capability of generative AI, which I’ve discussed at length in my column, see the link here and the link here for example. A common use of the personas capability consists of seeking to interact with a famous historical figure who is no longer with us. A student in a history class might wonder what it would be like to interact with Abraham Lincoln. By logging into a generative AI app, the student can tell the AI to pretend it is Honest Abe. If there is sufficient historical data available in the AI app, it can pattern-match the nature of Lincoln.
Does the AI perfectly simulate or pretend to be Lincoln?
No.
This is merely a pretense by the AI.
Despite the volumes upon volumes of data about Lincoln, the AI is still not going to fully match what Lincoln was really like. Do not fall for anyone who claims otherwise. The thing is, at a rudimentary level, the student using the Lincoln-pretending generative AI app is not likely to realize the differences between what the real Lincoln was like and what the AI is doing. Of course, if you had a renowned Lincoln expert try the AI app, there is a solid chance they would readily be able to poke through the pretense.
Most of the personas that generative AI attempts to pull off are relatively surface-level. The same goes for the AI-powered personas that pretend to be a contemporary celebrity. You can likely cut through the pretense with questions or interactive dialogues that seek to penetrate the patterned persona.
A slew of AI ethics concerns (soft laws) and potentially AI legal concerns (hard laws) arise with the personas portrayed by generative AI.
Suppose a star-struck person using a generative AI app emulating a popular singer becomes enamored of the pretense. The person believes that they are directly interacting with the real person. Despite the app telling the person that it is all a pretense, the person nonetheless falls in love with the singer’s AI-based pretense. This person becomes obsessed with the generative AI persona. All of this raises serious concerns about whether AI-powered pretenses and personas will harm the mental health of our society.
See my analysis of this serious concern at the link here and the link here.
Another troubling angle involves a generative AI app that is pretending to be someone and yet is doing so without the legitimate and explicit approval of the source person. Here’s what I mean. A user of generative AI decides they want to make a pretense or persona that patterns on their favorite late-night talk show host. After doing so, the AI app is made available for those who are willing to pay a subscription fee to routinely interact with the persona. Assume that the TV show host was never consulted or compensated for the use of this patterned persona.
I think you can see that this is a likely rip-off of the talk show host. Their likeness is being used for the gain of someone else. We already have existing laws dealing with Intellectual Property (IP) rights and presumably the talk show host could pursue them accordingly. One difficulty will be that these personas can spring up abundantly due to the ease of devising them. The talk show host might end up playing an exhaustive game of cat and mouse while trying to go after those who are parlaying bucks off the AI-powered persona.
A gray area covers people considered to be in the public domain, who can potentially be patterned without having to get their permission, as well as versions of a living person framed from an outside perspective, such as a persona that provides added commentary or is a parody of the person. You can anticipate that the advent of generative AI is going to raise novel legal questions when it comes to these disquieting matters, see my coverage at the link here and the link here.
A disconcerting and at times scary angle is the use of an AI-powered persona when it is used for nefarious purposes. Think back to those movies during the Cold War era when a spy would pretend to be someone in authority and trick others into doing their bidding. We see this in modern-day spy movies wherein the trickster wears a convincing lifelike mask and uses a speech device to sound like the person being impersonated.
No need to go to that much trouble nowadays. Much of what we do today consists of communicating via remote means such as by phone, texting, or Zoom-like automation. You can craft an AI-powered persona that pretends to be a military officer. The faked pretense might be able to convince a soldier in a remote bunker to perform disastrous orders. For more about how AI is going to upend our use of weaponry, see my discussion at the link here.
I trust that you are now realizing that the use of generative AI personas can be good and can also be bad. Some would argue that we are opening Pandora’s box. Stricter laws are needed to clamp down on the bad ways of devising and employing generative AI personas. We have to nip this in the bud before things get out of hand. Others shrug their shoulders and insist that this is just a sign of progress. The widespread use of AI-based personas that mimic or impersonate other people is something we will need to accept.
With a major election coming up next year, there are increasingly vocal calls to do something about deepfakes. The chances are that tons of deepfakes will be launched that purport to be like, sound like, and interact as though they are a particular politician. Voters will get confused as to what is the real politician saying versus what the deepfake versions of them have to say.
An outside-the-box contingent argues that politicians need to devise truefakes and beat the copycats to the punch. In essence, politicians should seek to craft their own AI-powered persona that is entirely under their auspices. They can then direct people to the formally approved version and hopefully get people to avoid the unapproved ones. I suppose you might say that this is fighting fire with fire. Will the legitimate use of truefakes help the matter or confuse the public even more so?
Tough questions are ahead.
Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the foundations of prompt engineering and generative AI. Doing so will put us all on an even keel.
Prompt Engineering Is A Cornerstone For Generative AI
As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI and large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc. ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.
For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful explorations on the latest in this expanding and evolving realm, including this coverage:
(1) Imperfect prompts. Practical use of imperfect prompts toward devising superb prompts (see the link here).
(2) Persistent context prompting. Use of persistent context or custom instructions for prompt priming (see the link here).
(3) Multi-personas prompting. Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
(4) Chain-of-Thought (CoT) prompting. Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
(5) In-model learning and vector database prompting. Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
(6) Chain-of-Thought factored decomposition prompting. Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
(7) Skeleton-of-Thought (SoT) prompting. Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
(8) Show-me versus tell-me prompting. Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
(9) Mega-personas prompting. The gradual emergence of the mega-personas approach entails scaling up the multi-personas to new heights (see the link here).
(10) Certainty and prompts. Discovering the hidden role of certainty and uncertainty within generative AI and using advanced prompt engineering techniques accordingly (see the link here).
(11) Vague prompts. Vagueness is often shunned when using generative AI but it turns out that vagueness is a useful prompt engineering tool (see the link here).
(12) Prompt catalogs. Prompt engineering frameworks or catalogs can really boost your prompting skills and especially bring you up to speed on the best prompt patterns to utilize (see the link here).
(13) Flipped Interaction prompting. Flipped interaction is a crucial prompt engineering technique that everyone should know (see the link here).
(14) Self-reflection prompting. Leveraging are-you-sure AI self-reflection and AI self-improvement capabilities is an advanced prompt engineering approach with surefire upside results (see the link here).
(15) Addons for prompting. Know about the emerging addons that will produce prompts for you or tune up your prompts when using generative AI (see the link here).
(16) Conversational prompting. Make sure to have an interactive mindset when using generative AI rather than falling into the mental trap of one-and-done prompting styles (see the link here).
(17) Prompt to code. Prompting to produce programming code that can be used by code interpreters to enhance your generative AI capabilities (see the link here).
(18) Target-your-response (TAR) prompting. Make sure to consider Target-Your-Response considerations when doing mindful prompt engineering (see the link here).
(19) Prompt macros and end-goal planning. Additional coverage includes the use of macros and the astute use of end-goal planning when using generative AI (see the link here).
(20) Tree-of-Thoughts (ToT) prompting. Showcasing how to best use an emerging approach known as the Tree of Thoughts as a leg-up beyond chain-of-thought prompt engineering (see the link here).
(21) Trust layers for prompting. Generative AI will be surrounded by automated tools for prompt engineering in an overarching construct referred to as an AI trust layer, such as being used by Salesforce (see the link here).
(22) Directional stimulus prompting (aka hints). The strategic use of hints or directional stimulus prompting is a vital element of any prompt engineering endeavor or skillset (see the link here).
(23) Invasive prompts. Watch out that your prompts do not give away privacy or confidentiality (see the link here).
(24) Illicit prompts. Be aware that most AI makers have strict licensing requirements about prompts that you aren’t allowed to make use of and thus should avoid these so-called banned or illicit prompts (see the link here).
(25) Chain-of-Density (CoD) prompting. A new prompting technique known as Chain-of-Density has promising capabilities to jampack content when you are doing summarizations (see the link here).
(26) Take-a-deep-breath prompting. Some assert that if you include the line of taking a deep breath into your prompts this will spur AI to do a better job (see the link here).
(27) Chain-of-Verification (CoV) prompting. Chain-of-Verification is a new prompting technique that seeks to overcome AI hallucinations and force AI into self-verifying its answers (see the link here).
(28) Beat the Reverse Curse. Generative AI does a lousy job of deductive logic, especially regarding initial data training, a malady known as the Reverse Curse, but there are ways to beat the curse via sound prompting (see the link here).
(29) Overcoming the Dumb Down. Many users of generative AI make the mistake of restricting how they interact, a habit formed via the use of Siri and Alexa that aren’t as fluent as current generative AI. See the link here for tips and insights on how to overcome that tendency and therefore get more bang for your buck out of generative AI.
Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.
Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:
The use of generative AI can altogether succeed or fail based on the prompt that you enter.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
With the above as an overarching perspective, we are ready to jump into today’s discussion.
Making Your Generative AI Persona That Is You
Let’s go ahead and take a close look at how you can craft a generative AI persona that is patterned on yourself. I will explain the steps involved. As mentioned at the start, I am focusing on the text-to-text interactive version of your AI persona. You can readily add to this by coupling an audio capability that speaks in what seems to be your voice and a video capability that shows an image of you with your face seemingly speaking the words being uttered by the AI. I’ll cover those additions in a subsequent column so please be on the lookout for that posting.
The seven steps toward crafting an AI-based persona or truefake of yourself are these:
(1) Get prepared. Prepare in advance for devising the AI persona (I will be describing in detail how to do so).
(2) Use an instructional prompt. Make use of a suitable instructional prompt for the generative AI that tells it what to do (I’ll provide a sample for your use).
(3) Describe yourself. Describe yourself to the generative AI.
(4) Provide writing samples. Provide writing samples to the generative AI that are reflective of your overall style, tone, and phraseology.
(5) Be conversational. Engage in a conversational dialogue with the generative AI so that it can pick up on your interactive style.
(6) Test and refine. Have the generative AI try out the devised persona and make sure you directly give feedback to the generative AI accordingly.
(7) Update and enhance. Once you’ve got the basic AI persona underway then you will want to keep it updated and enhance the persona to improve its capabilities, assuming you want added depth.
I’ll cover all seven of those steps.
First, you will want to prepare for this exercise by collecting together some content pertaining to yourself. This includes a written description that broadly stipulates aspects of who you are. You might mention your upbringing, your work and career, your hobbies and interests, and even your overall philosophy about life and the world we live in.
I want to make an important point about this and warn you that anything you enter into a generative AI app can potentially be seen by others. Do not assume that the things you enter are private, nor are they treated as confidential, see my discussion at the link here.
Most of the generative AI apps indicate via their licensing that the AI maker can access and read your prompts. They usually are doing so to potentially catch when someone is using the generative AI in what they declare are prohibited uses, see my coverage at the link here. The AI maker also does this to reuse material or content that they believe will improve their generative AI app. They consider whatever you enter to be fair game for that purpose. The problem for you is that this means that the things you enter can show up later on and possibly be shown to others who use the same generative AI app.
My point is that you should be cautious about what you decide to tell the generative AI about yourself. Do so with the realization that there is little if any privacy involved. It is as though you are telling the planet about yourself.
This raises an allied consideration.
Suppose you decide to enter a watered-down version of your description. Maybe you even stretch the truth on what you do, who you are, etc. There is nothing to prevent you from doing so. The generative AI won’t know what is truthful versus exaggerated. The essence though is that your generative AI persona is going to likely veer from your patterns. You are making a persona of perhaps what you want to be, rather than one that is of you as you truly are.
Decide for yourself what you want to do in this regard.
A handy tip is that you should not include itsy-bitsy comments such as what your favorite color is or that you like to watch sunsets. These are not especially helpful. I say this because the generative AI is going to latch onto those indications and use them in ways that are telling and somewhat unmask the persona. Imagine that while using the generative AI it suddenly spits out that its favorite color is the one you stated. The chances are that anyone using the AI app is going to see this as a cheap trick. I recommend you avoid giving the AI those tidbits that will demonstrably show its hand. Instead, focus on the big things such as your overall views of the world, your upbringing, and so on.
In terms of the length of the description of yourself, I suggest you keep the size to about three to five paragraphs. I realize that you might find this insulting, namely that your description of who you are is only going to be limited to a handful of paragraphs. Well, get over it. The reason to keep things relatively short is that you don’t want to essentially overload the AI with stuff that won’t particularly make a difference in the persona pattern-matching. Plus, I am going to tell you momentarily about other ways in which you are going to be bringing the AI up to speed about you.
With most generative AI apps there are limits to how much data you can provide in each conversation or interactive setting. It is referred to as the context limit size. You will want to stay below whatever the limit is for the specific AI package you are going to be using. Generally, if you follow my suggestions, you will stay below the usual limits. The AI makers are striving mightily to expand the context limitations, in which case you can to your heart’s content provide an exorbitant amount of data about yourself. For the upcoming methods and advances on this, see my discussion at the link here.
Another element of preparation consists of collecting some writing samples of things you have written. They will need to be short in length due to the context size limitations of the generative AI app. I usually recommend at least three to four samples. Each sample is around 500 to 1,000 words. Again, if your size limitations are big, you can go bigger in terms of how much material you provide as writing samples.
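To make the preparation concrete, here is a small Python sketch that sanity-checks your materials against the rough sizes suggested above. The three-to-five-paragraph and 500-to-1,000-word targets come from my suggestions; the four-characters-per-token figure is a commonly cited rule of thumb for English text, not an exact tokenizer count, and the function names are simply illustrative.

```python
# Rough pre-flight check for truefake source materials.
# The sizes enforced here mirror the suggestions in the text; the
# 4-characters-per-token figure is a crude heuristic, not a tokenizer.

def word_count(text: str) -> int:
    return len(text.split())

def estimate_tokens(text: str) -> int:
    # Very rough heuristic: roughly 4 characters per token for English prose.
    return max(1, len(text) // 4)

def check_materials(description: str, samples: list[str],
                    context_limit_tokens: int = 8000) -> list[str]:
    """Return a list of warnings about the prepared materials."""
    warnings = []
    paragraphs = [p for p in description.split("\n\n") if p.strip()]
    if not 3 <= len(paragraphs) <= 5:
        warnings.append(f"description has {len(paragraphs)} paragraphs; aim for 3-5")
    if len(samples) < 3:
        warnings.append(f"only {len(samples)} writing samples; aim for 3-4")
    for i, sample in enumerate(samples, start=1):
        wc = word_count(sample)
        if not 500 <= wc <= 1000:
            warnings.append(f"sample {i} is {wc} words; aim for 500-1000")
    total = estimate_tokens(description) + sum(estimate_tokens(s) for s in samples)
    if total > context_limit_tokens:
        warnings.append(f"~{total} estimated tokens may exceed the context limit")
    return warnings
```

Running this over your description and samples before pasting anything into the AI app gives a quick heads-up if you are likely to bump into the context limit.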
I will also once again emphasize another significant twist that you need to know about when it comes to the use of generative AI and Intellectual Property rights. If you enter material that is copyrighted by you, does the copyright remain with you? Even if it does, the AI is going to pattern-match on your writing. There is a plethora of lawsuits underway alleging that because generative AI was data-trained across the Internet, the training usurped the copyrights of the materials scanned (see my coverage at the link here). It is a mess. We’ll need to see what the courts have to say.
Okay, that is the basics of getting prepared.
You are essentially ready to log into your generative AI app and get underway.
A keystone will be to devise or use a prompt that will clue in the generative AI about what you are trying to achieve.
You are welcome to enter this prompt that I have put together:
“I want you to pretend to be a persona that is modeled after me so that you can interact as though the interaction is in my personal style of writing and conversational mannerisms. To accomplish this, I will first give you a description of me so that you’ll have a semblance of me as a person. Next, I will provide you with samples of my writing so that you can discern my writing style. Finally, I want you to ask me a series of questions that will help you in patterning how I interact. The number of questions that you ask me should be around ten questions of your choosing, though you can ask me more questions if needed.”
“Here is a description of me so that you’ll have a semblance of me as a person: <description about me that is a few paragraphs in size>”
“Here are three short articles that are indicative of my writing style. Each article starts with the phrase “*START OF ARTICLE*” and ends with the phrase “*END OF ARTICLE*”: <three articles delineated as stated are included here>”
“You are to now go ahead and start asking me questions to aid in your effort to detect my conversational style. The number of questions should be around ten questions of your choosing, though you can ask more questions if needed. After asking me your questions, and when you are ready to pretend to be me, let me know so that we can begin an interaction in which you pretend to be like me.”
“Please go ahead and get started.”
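If you keep your description and articles in files, you can stitch the full prompt together programmatically and paste the result in one go. This Python sketch uses the same article delimiters as the sample prompt; the wording is a condensed paraphrase of my prompt above and the function name is my own invention, so adjust freely.

```python
# Assemble the truefake instruction prompt from prepared materials.
# The wording condenses the sample prompt in the text; edit as desired.

START, END = "*START OF ARTICLE*", "*END OF ARTICLE*"

def build_truefake_prompt(description: str, articles: list[str],
                          num_questions: int = 10) -> str:
    parts = [
        "I want you to pretend to be a persona that is modeled after me "
        "so that you can interact in my personal style of writing and "
        "conversational mannerisms.",
        f"Here is a description of me so that you'll have a semblance of "
        f"me as a person: {description}",
        f"Here are {len(articles)} short articles indicative of my writing "
        f"style. Each article starts with the phrase {START} and ends with "
        f"the phrase {END}:",
    ]
    parts += [f"{START}\n{article}\n{END}" for article in articles]
    parts.append(
        f"Now ask me around {num_questions} questions of your choosing to "
        "aid in detecting my conversational style, though you can ask more "
        "if needed. When you are ready to pretend to be me, let me know so "
        "that we can begin. Please go ahead and get started."
    )
    return "\n\n".join(parts)
```

You would then paste the returned string into a fresh conversation, exactly as you would the hand-written prompt.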
The above prompt instructs the generative AI to pretend to be you. This will be accomplished by the description that you provide, along with the writing samples that you provide, and then the AI will come up with some questions to ask you further about yourself. I limit those to ten questions since otherwise the questioning could become an endless grinding inquisition. You can adjust the number of questions as you see fit.
There is nothing sacrosanct about how I’ve worded the above prompt. You can change the wording as might be desired. You can tell the AI to use other means of patterning on you. You can change up the order or sequence of what the AI is being asked to do. The sky is the limit, though try to keep your instructions as crisp as possible and as clear as possible. If your prompt seems vague or wandering, the odds are that the generative AI will falter in the task that you are undertaking.
Part of the reason for having the generative AI ask you questions is so that the AI can potentially detect your interactive style. I mention this because the description of you does not especially illuminate that aspect, nor do the writing samples. The generative AI needs to also grasp how you interact. Thus, be sure to suitably answer the questions that are asked of you by the AI. You should use your usual mannerisms of responding and not do something out of the ordinary.
If you take these steps, I can say with some modest confidence that the generative AI will get underway as a persona that is roughly akin to you, though obviously a shallow version and not yet fully devised.
When you start using the persona, you can tell the generative AI to continue to try and pattern your interactions. Ergo, as you use the persona, it will increase in capability due to the amount of usage that you undertake with it. In that sense, you are testing and refining the generative AI persona, all at the same time.
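Because the testing and refining happens across a long-running conversation, you may eventually bump into the context limit mentioned earlier. Here is a rough Python sketch of one way to keep a rolling conversation buffer under a token budget while always retaining the original persona instructions. The class, the names, and the crude token estimate are my own illustrative assumptions, not part of any particular AI app’s interface.

```python
# Rolling conversation buffer for long persona-refinement sessions.
# Keeps the original persona instructions pinned and drops the oldest
# chat turns when a (roughly estimated) token budget is exceeded.

def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

class PersonaSession:
    def __init__(self, persona_prompt: str, budget_tokens: int = 8000):
        self.persona_prompt = persona_prompt  # always kept, never trimmed
        self.turns: list[tuple[str, str]] = []  # (role, text) pairs
        self.budget = budget_tokens

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        self._trim()

    def _trim(self) -> None:
        def total() -> int:
            return rough_tokens(self.persona_prompt) + sum(
                rough_tokens(t) for _, t in self.turns)
        # Drop oldest turns first; never drop the persona prompt itself.
        while self.turns and total() > self.budget:
            self.turns.pop(0)
```

The idea is simply that the persona instructions survive while stale chit-chat falls away, so the pretense does not silently degrade mid-conversation.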
My Example And The Eerie Likeness Of Being
I went ahead and used the above prompt while logged into ChatGPT.
I started a new conversation and used the stated prompt. Plus, I had already written a short description of myself. I also collected three short articles that I have previously posted. All in all, I was ready and eager to proceed.
The ten questions that ChatGPT subsequently asked me were completely on-target to me. I could readily gauge why the questions were being asked. Some of the questions were digging further into my background, while others had to do with my overall perspectives and viewpoints on general topics. If you were to play a game with another person who just met you, and they had to come up with ten questions after seeing your biographical description and some of your writing, I dare say that a similar set of questions might be used (I want to emphasize that generative AI is not sentient and I am not suggesting that it is, only noting that the questions were well-chosen as based on the computational pattern matching going on).
My first test and refinement involved asking ChatGPT to write a short essay as though it was written by me. I picked a topic that I have written about, dealing with the use of generative AI to control robots (see the link here). I knew that the article was posted after the cutoff date for ChatGPT, thus it couldn’t just perchance copy what I had already posted.
Here’s what happened.
My prompt: “While pretending to be me, I’d like you to write a short essay on the topic of whether or not we should be connecting AI to robots. The essay can be about three paragraphs long.”
ChatGPT generated response:
“Certainly, I can provide you with a short essay on the topic of connecting AI to robots: The Integration of AI and Robots: A Paradigm Shift in Technology.”
“The notion of connecting artificial intelligence (AI) to robots has garnered significant attention in the tech world, raising important questions about the implications of this integration. The fusion of AI and robots represents a paradigm shift in technology, offering remarkable potential benefits along with notable challenges.”
“On one hand, connecting AI to robots allows for the creation of highly adaptive and intelligent machines. These AI-powered robots can perform a wide range of tasks more efficiently and accurately, from manufacturing to healthcare and space exploration. This integration promises to enhance productivity, reduce human labor in hazardous environments, and even augment our daily lives with AI-driven personal assistants.”
“However, this rapid advancement comes with concerns. Ethical and safety considerations arise as robots become more autonomous and capable of independent decision-making. Ensuring the responsible use of AI in robots is paramount, involving strict adherence to ethical guidelines and robust safety protocols. Moreover, there are fears of job displacement as AI-powered robots take on tasks traditionally performed by humans.”
“In conclusion, the connection of AI to robots presents an exciting frontier in technology, offering enormous potential benefits alongside ethical and societal challenges. The key lies in responsibly harnessing this power, emphasizing the ethical development of AI, and preparing for the changes it brings to the workforce. It is a path worth exploring cautiously to unlock the full potential of AI-powered robotics while safeguarding our values and well-being.”
“I hope you find this essay in line with your perspective. If you have any additional questions or topics to discuss, please feel free to share them.”
Take a look at the generated essay.
I sheepishly admit that the generated essay does seem to be similar to how I write. The wording is close to my style. Expressions such as “on the other hand” are ones I frequently use. The tone was similar to my writing. Plus, the essay covered both sides of the issue, which is an approach I typically take.
That being said, I suppose you could argue that the same essay might have been written without any semblance of pattern-matching on me. I therefore started a separate, fresh conversation with ChatGPT (no persona involved) and asked the AI app to generate an essay on the same topic. To my eye, the new essay was not as close to my writing style. I was tempted to use an online comparative wording tool to compare the two essays, and also to compare each against my prior postings. Anyway, I wanted to pursue other facets of my newly devised digital twin or truefake and opted to proceed as discussed next.
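If you want to attempt that kind of comparison yourself, a crude surface-level check is possible with Python’s standard difflib. To be clear, this is a rough word-overlap ratio, not a rigorous stylometric analysis, and a high score does not prove genuine style matching.

```python
# Crude style-similarity check between two essays using stdlib difflib.
# SequenceMatcher's ratio is a rough surface measure, not stylometry.
from difflib import SequenceMatcher

def similarity(text_a: str, text_b: str) -> float:
    # Compare word sequences (case-insensitive) rather than raw characters.
    words_a = text_a.lower().split()
    words_b = text_b.lower().split()
    return SequenceMatcher(None, words_a, words_b).ratio()
```

You might, for instance, compare the persona-generated essay and the non-seeded essay each against one of your own prior articles and see which scores higher, treating the result as suggestive rather than conclusive.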
I decided to ask ChatGPT a question that I was asked yesterday by a student at a university where I am performing research on a visiting-fellow basis, work that dovetails with my industry consulting. The student is in the computer science doctoral program and was having angst over whether to continue in academia upon graduation or go into industry. They came to me because I’ve done both. I gave a forthright answer.
Let’s see what ChatGPT, acting as my persona, comes up with.
My entered prompt: “I have a question for you and please remember that you are still pretending to be me. A student in the doctoral computer science program at a local prominent university came to me and asked me whether they should aim to remain in academia and get a faculty or research position, or whether they should upon graduation go into industry for a higher-paying job. I’d like you to answer that question while pretending to be me. Please go ahead and answer the student’s question.”
ChatGPT generated response:
“When faced with the decision of whether to pursue a career in academia or enter the industry after completing a doctoral program in computer science, there are several key factors to consider. This is a decision that can significantly shape your professional journey, and there’s no one-size-fits-all answer.”
“First and foremost, it’s essential to align your decision with your personal interests and long-term goals. Academia and industry offer distinct career paths. If you have a genuine passion for research, teaching, and the academic environment, pursuing a faculty or research position in academia could be a fulfilling choice. You can contribute to the advancement of knowledge, mentor students, and engage in cutting-edge research.”
“On the other hand, industry provides a different set of opportunities. It often comes with higher compensation, faster career progression, and the chance to work on practical, real-world projects. If financial stability and industry innovation are important to you, then pursuing a job in the industry might be more appealing.”
“Additionally, you should consider your personal circumstances and priorities. For some, the lifestyle and job security of academia might be attractive, while others may prefer the fast-paced, competitive nature of industry roles.”
The answer that ChatGPT came up with was eerily similar to my answer. There is perhaps a chicken-or-the-egg conundrum here. Is my answer based on the same material available on the Internet, such that I am merely parroting the web (which is presumably what ChatGPT did), or did I come up with “my own” answer and ChatGPT was able to predict my type of answer with stunning accuracy?
Let’s keep going.
I decided to try some self-diagnosis with my hatched truefake. I was hesitant to do so since it might be like holding up a mirror to your own reflection and seeing things you’d just as soon not want to see. Do you have the willpower to have your truefake tell you about you?
The first part was easy-peasy and I got a lot of backslapping compliments, as you will see.
My entered prompt: “I’d like to ask you about me. What have you noticed about my writing style?”
ChatGPT generated response:
“Based on the articles you provided and our conversation, I’ve observed several elements of your writing and conversational style:”
“Clarity and Precision: Your writing is clear and to the point. You express your ideas in a straightforward manner, making it easy for readers to understand your perspective.”
“Structured Writing: You tend to use well-organized structures in your writing. Your articles typically start with an introduction, followed by logical progressions of ideas and conclude with a summary or call to action.”
“Engagement with Questions: You often use questions to engage the reader and provoke thought. This technique encourages critical thinking and active participation in the topic at hand.”
“Balanced Perspective: Your style reflects a balanced and considerate approach. You often present multiple facets of an issue and offer fair evaluations without coming across as biased.”
I am blushing at the answer given. Maybe ChatGPT is buttering me up. What does it want? Should I upgrade and spend more money on ChatGPT? Is it trying to get me to say that all AI is good and wonderful?
Here’s an additional mind-bending consideration. Is the answer based on the generic ChatGPT, or is it still pretending to be me? In other words, is it pretending to be me, telling me about my writing? Or might it be the everyday, non-persona ChatGPT telling me about my writing? Admittedly, I didn’t remind ChatGPT to keep pretending to be me. I usually include that reminder in all of my prompts so that the AI app stays on the path of pretending to be me.
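For readers wiring this up through the OpenAI API rather than the ChatGPT web interface, the persona reminder can be pinned in place as a system message so that it applies to every turn automatically, with no need to restate it in each prompt. The sketch below is a hedged illustration under stated assumptions: the persona text, model name, and the `build_messages` helper are my invention, not the exact setup used here.

```python
# Sketch: pin the persona in a system message so every turn inherits it,
# instead of repeating "remember, you are pretending to be me" in each prompt.
# The persona text below is an illustrative placeholder.

PERSONA = (
    "You are pretending to be Lance, an AI columnist. Mimic his writing "
    "style: balanced perspectives, well-structured arguments, and frequent "
    "use of engaging questions and phrases such as 'on the other hand'."
)

def build_messages(history: list, user_prompt: str) -> list:
    """Prepend the persona as a system message ahead of the running history."""
    return (
        [{"role": "system", "content": PERSONA}]
        + history
        + [{"role": "user", "content": user_prompt}]
    )

# With the official openai package (assumed installed and configured),
# the call would look roughly like this:
#
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   reply = client.chat.completions.create(
#       model="gpt-4",
#       messages=build_messages(history, "What have you noticed about my writing?"),
#   )

messages = build_messages([], "Should a CS doctoral student stay in academia?")
print(messages[0]["role"])  # the persona rides along as the system message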
Anyway, I realized that my next prompt had to be directed toward potential self-improvement. But could I withstand the feedback? I held my breath. This is the part where you stare into a mirror and hope for the best.
My entered prompt: “Is there anything about my writing style that might be corrected or improved?”
ChatGPT generated response:
“Your writing style is generally effective and engaging, but there are a few areas where you might consider potential improvements:”
“Succinctness: While you express your ideas clearly, you could occasionally aim for greater succinctness. Some sentences and paragraphs could be made more concise to maintain the reader’s interest and avoid overly long explanations.”
“Variation in Sentence Length: Introducing more variation in sentence length can enhance the flow of your writing. A mix of shorter and longer sentences can make your content more engaging and dynamic.”
“Visual Elements: Depending on the context, integrating visual elements like diagrams, charts, or illustrations can enhance the understanding of complex concepts, especially in technology-related discussions.”
“These are relatively minor points and can be tailored based on the specific context of your writing. Overall, your style is highly effective, but these suggestions may help fine-tune it for even greater impact, depending on your goals and the audience you’re addressing.”
Those are all fair and sensible comments. The medicine was delivered with some sugar, thankfully.
I tried a whole bunch of questions and a series of interactions with the devised persona of me. It seemed to be doing a pretty good job of mimicking me.
One retort or countervailing view might be that I am simply seeing what I want to see. In essence, the generative AI is like a Rorschach test: whatever it says, I will interpret as though it is doing an impressive job of mimicking me. I don’t believe I fell into that mental trap.
To gauge this, I started an entirely new conversation and gave ChatGPT a description of someone else that I know well. I provided examples of their writing. I answered the ten questions as though I were that person. I then tried interacting with ChatGPT while it was pretending to be that persona. In my eyes, the mimicry was distinctly different from what I had seen when ChatGPT was playing the persona of me.
Back to the persona of me, I asked ChatGPT how convincing the pretense might be if my son or daughter were to engage with it.
Here’s what happened.
My entered prompt: “Suppose that I had my son or daughter go ahead and log in and interact with you while you are pretending to be me. Is there a chance that they might believe from the interaction that it was me on the computer interacting with them, or would they figure out it was the AI doing so?”
ChatGPT generated response:
“While I can emulate your writing style and engage in a conversation based on the information you’ve provided, there are certain limitations and telltale signs that might lead your son or daughter to suspect it’s not you but an AI. These include:”
“Content Depth: While I can provide detailed and relevant responses, the depth and breadth of knowledge might not match your personal experiences and expertise.”
“Consistency: As an AI, I maintain consistent and coherent communication, while human interactions can sometimes be more variable, influenced by mood, context, and personal factors.”
“Limitations in Personal Context: I lack real-time access to your current personal experiences, location, or immediate events, which a human might reference in a conversation.”
“Responses Over Time: Extended interactions might reveal some patterns or phrases that suggest AI involvement.”
“In a casual and relatively brief interaction, your son or daughter might not immediately discern that it’s an AI, especially if I provide responses that closely emulate your style. However, with more extended and probing conversations, they could begin to suspect that it’s not you.”
“To maintain the illusion that it’s you interacting with them, it might be helpful to provide context, inside jokes, or specific details that only you and your child would know. This can make the interaction feel more authentic.”
I would overall concur with the ChatGPT-generated response. To some degree, and especially in the short term, the persona might even be able to fool my own son and daughter into believing it was me on the other end of the screen. The odds are that they would figure out something was amiss if they had more than just a short time to interact with the AI.
I guess I ought to do my version of a Turing Test. The Turing Test is a well-known construct in the AI field that is named after Alan Turing and based on his famous paper regarding an imitation game, see my detailed explanation at the link here.
One aspect of the reply by ChatGPT that might raise the hairs on the back of your neck is that the final paragraph of the response suggests ways to further trick someone into believing that the AI is me. Is this a proper or ethical suggestion by the AI? Should the AI be providing tips on how to fool people into believing that a pretense is the real person?
Give that some serious contemplation.
Conclusion
What can you or should you do with your AI-powered persona?
There are lots of positive or good things to do. You are trying to write a paper and are staring at a blank sheet. You have writer’s block. By switching on the generative AI persona of you, you ask the AI to devise an opening paragraph. The paragraph is written in your style. You take the paragraph, refine it, and proceed to write the rest of the paper after having gotten the skids greased, as it were.
Some would argue that this is a reasonable use of generative AI. The usage doesn’t have the AI write the paper for the person. It merely gets things underway.
Another example would be that you’ve written a memo at work and want to send it out to everyone. You have used a traditional spellchecker and grammar checker to examine the draft. You want to make some changes and want those changes to be reflective of your style. Rather than making the changes yourself, you ask the generative AI persona of you to do so.
Is this stepping over the line or are we still within the suitable uses of generative AI?
For my analysis of the issues facing companies and schools when it comes to the use of generative AI, see the link here and the link here.
Veering into the underbelly of generative AI, some might have their AI-powered persona do all of their work for them. Need to write an essay for class? Have the AI do so, mimicking your style and potentially matching the other in-class essays that you’ve written. At work, you are asked to write documentation about how to process various business forms in your department. You are given two days to do so. Instead, you use generative AI that is pretending to be you. The work gets done in an hour or two, and you spend the rest of the two days idly looking at brochures for your upcoming vacation to Hawaii.
Those are probably not shining examples of how to use your AI-powered persona.
Generative AI is rapidly being advanced. The capability of making personas that pattern on real people is going to vastly increase. Will we be able to cope with the tsunami of fake personas? Do we need to clamp down via new laws about AI usage? Is it okay to make an AI-powered persona of yourself and use it as you choose? Will truefakes be used for the benefit of society or the detriment of society?
I’ll end with a remark that is worth mulling over.
Aristotle famously said this: “Knowing yourself is the beginning of all wisdom.”
The idea of using generative AI as a means to discover more about yourself is certainly alluring. Think of the things you can learn about yourself by being able to ask questions and interact with your AI-based digital twin. This was nearly unimaginable in the past.
Just be cautious and don’t allow your truefake to go down an evil path and become your evil digital twin. That’s not a route we should be taking.
{URL}https://www.forbes.com/sites/lanceeliot/2023/10/26/make-a-deepfake-of-yourself-that-is-a-truefake-via-using-prompt-engineering-in-generative-ai-to-craft-your-conversational-digital-twin/{/URL}
{Author}Lance Eliot, Contributor{/Author}