Introduction to ChatGPT


Practical tips to improve your prompts

When we explore the vast universe of artificial intelligence, we inevitably come across two outstanding pioneers. The first is Google, which bet on this technology in January 2014 by acquiring DeepMind, a company dedicated to artificial intelligence research and development. Then, in December 2015, another leader in the field was born: OpenAI, which later made a major breakthrough with its tool ChatGPT. This innovative system has left a significant mark on the artificial intelligence landscape, cementing OpenAI's position as a key player in this exciting journey into the technological future. 

OpenAI 

OpenAI emerged as a non-profit entity focused on advancing artificial intelligence in an ethical and responsible manner, committed to openly sharing its results and advances and adopting a comprehensive transparency policy reflected in the name itself: OpenAI, which could be read as open artificial intelligence for the benefit of society. Although the company attracted notable investments, including the support of Elon Musk, the costs of training artificial intelligence models were too high; to compete with renowned laboratories such as Google or Meta, OpenAI could not remain a purely non-profit organization. In 2019, Microsoft became a crucial strategic partner, providing an initial investment of $1 billion, and in November 2022 the ChatGPT tool was publicly launched for the first time.

ChatGPT 

ChatGPT is a generative artificial intelligence tool developed and trained by OpenAI. Based on the GPT (Generative Pre-trained Transformer) architecture, it not only understands human language but can also generate coherent, contextually appropriate content. Its versatility makes it a valuable resource in daily life, as it can perform a wide range of tasks, from writing texts to generating useful recommendations. In addition, its ability to hold fluid conversations in different languages makes it a tool for constant use: because it has been trained on a large amount of data from various languages, the underlying GPT architecture adapts to and learns their common linguistic patterns. It currently handles many languages, including English, Spanish, German, French, Portuguese, Japanese, and Korean, among others. 

How to have an effective conversation with ChatGPT? 

Although ChatGPT can understand and respond to most input text, the quality of its responses varies with the context and the complexity of the questions asked. In informal, everyday situations this may be adequate; however, if you seek precise responses, or want to automate or improve processes with the help of AI, correcting prompts or instructions at runtime is neither an efficient nor a viable practice. 

To achieve this, it is essential to provide clear, effective instructions that generate responses as accurate as possible. It is therefore crucial to have a well-defined and optimized prompt structure. An effective prompt encompasses several essential elements:  

 

  • Context: a well-defined context not only limits the topic of the conversation but also enriches the response. Since ChatGPT was trained on information from across the Internet, narrowing its search field lets it generate more precise answers more quickly.
  • Instructions: precise instructions that guide the model on what is expected.
  • Inputs: material that helps the model better understand the task, for example a text to summarize, or examples that enrich the context and therefore the answer. Inputs are usually enclosed in quotes ("").
  • Outputs: the desired output format, to ensure the response meets the user's expectations, for example a table, an ordered list, a specific opening sentence, or bullet points.
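These four elements can be combined programmatically. The sketch below is only an illustration of the structure described above; the helper name and example strings are our own, not part of any official API:

```python
def build_prompt(context, instruction, inputs=None, output_format=None):
    """Assemble a prompt from the four elements: context, instruction,
    optional inputs, and an optional desired output format."""
    parts = [f"Context: {context}", f"Instruction: {instruction}"]
    if inputs:
        # Inputs are usually enclosed in quotes so the model can
        # distinguish them from the instruction itself.
        parts.append(f'Input: "{inputs}"')
    if output_format:
        parts.append(f"Output format: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    context="90s rock music",
    instruction="Generate a list of the 10 best songs.",
    output_format="a table with name, genre, and year of publication",
)
print(prompt)
```

Keeping the elements separate like this makes it easy to reuse the same context with different instructions or output formats.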

Examples of prompts 

Let's look at a simple example. If we pass the following prompt to ChatGPT, "What are the best songs of the 90s?", it will return a response like this: 

 

But if we improve the prompt by adding more context, giving a more detailed instruction, and specifying how we want the response to be displayed, we will obtain something more elaborate. For the prompt "Generate a list of the 10 best rock songs of the 90s and generate a table with the columns of name, genre and year of publication", we will get an output like this: 

  • Context: best rock songs of the 90s. 
  • Instruction: generate a list.
  • Output: a table with name, genre, and year of publication columns. 
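The same improved prompt can also be sent programmatically. The sketch below only builds the JSON request body that OpenAI's Chat Completions API expects; the model name and the system message are assumptions, and actually sending the request would require an API key and an HTTP client:

```python
import json

# Request body for a hypothetical call to OpenAI's Chat Completions API.
# "gpt-4o-mini" is an assumed model name; substitute whichever chat model
# your account has access to.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a music expert."},  # assumed context
        {"role": "user", "content": (
            "Generate a list of the 10 best rock songs of the 90s and "
            "generate a table with the columns of name, genre and year "
            "of publication"
        )},
    ],
}
body = json.dumps(payload)
print(body)
```

Note that the context can live in a separate system message while the instruction and output specification stay in the user message.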

 

Now let's analyze the following prompt: "Take the role of a writer in the style of Pablo Neruda and write a poem that meets these guidelines: 

1. Write the poem in three stanzas.  

2. Include the name 'Pepita' in the poem. 

3. Include the words 'eyes', 'hair', and 'mouth'. 

4. The first stanza begins with 'When looking at the sky'."

We clearly see a context, in this case a writer in the style of Pablo Neruda, but it could equally be the role of an engineer, an English teacher, or an idea generator. We also see very detailed instructions for creating the poem, inputs such as a person's name, and an output specification requiring that the poem be written in three stanzas. An output for this prompt is shown below; each time we run the prompt we will get a different response that still meets the given specifications. 

 

We can also build our own translator. Analyze the following prompt: "You are an English translator. For each user input, you must return its translation following these guidelines:  

1. Always save the response for the user in the 'response' variable.  

2. Save the user's question in the 'question' variable. 

3. Return everything as plain text. 

'hello, how are you?'" 

The response to this instruction would be: 

 

If you wanted to build a translation application, it would be very useful if, instead of returning the response as plain text, the model returned it in a structured format such as the syntax of a programming language, which an application can parse and use directly.
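For instance, the translator prompt could ask for JSON instead of plain text. The sketch below shows such a prompt and parses a hand-written example reply; the reply string is hypothetical, not real model output:

```python
import json

# A variant of the translator prompt that requests structured JSON output.
translator_prompt = (
    "You are an English translator. For each user input, return ONLY a JSON "
    'object with two keys: "question" (the original text) and "response" '
    "(its English translation)."
)

# Hypothetical model reply following the format requested above.
model_reply = '{"question": "hola, como estas?", "response": "hello, how are you?"}'

# Because the reply is valid JSON, the application can read each field directly.
data = json.loads(model_reply)
print(data["response"])
```

With this format, the 'question' and 'response' variables from the original prompt become real fields the application can access by name.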

Limitations of ChatGPT and other language models 

Although artificial intelligence models improve day by day, some limitations must still be taken into account when interacting with them. 

  • Lack of deep understanding: these models lack deep understanding. You can tell the model to take on the role of a doctor or a trainer and ask it to recommend exercises or foods to improve your health, and it will indeed give you an answer, but in reality these models lack the true understanding and awareness needed to grasp the effect of their recommendations on your health. 
  • They generate convincing but incorrect answers: these models have a limitation known as hallucination. They can generate very convincing yet incorrect answers, because according to their internal calculations the answer looks right. Therefore, if you are unfamiliar with a topic and are using artificial intelligence to generate content or research, it is always better to have an expert on the topic review the results. For example, consider the prompt: "Take the last letters of the words 'Thinking different, making better' and concatenate them." One of the possible wrong answers is this: 

     

  • They are sensitive to bias: it is no mystery that these models were trained on information found on the Internet. AI models learn from historical data that can reflect existing biases in society, such as racial or gender discrimination or other prejudices. If the training data contain biases, those biases are likely to surface in the model's responses.
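Tasks like the last-letters example above have a deterministic answer, so they can be checked with a few lines of ordinary code, which is a good habit whenever one exists:

```python
import string

# Compute the correct answer to the last-letters prompt deterministically.
sentence = "Thinking different, making better"
last_letters = "".join(
    word.strip(string.punctuation)[-1]  # drop the trailing comma before indexing
    for word in sentence.split()
)
print(last_letters)  # → "gtgr"
```

Comparing the model's answer against a computation like this makes hallucinations on such tasks immediately visible.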

     

In summary, optimizing the instructions passed to ChatGPT can have a significant impact on the quality of responses and the efficiency of the interaction. By providing clear, specific, and well-structured instructions, we guide the model toward generating more relevant and useful responses. This not only improves the experience but also saves time by reducing the need for additional corrections or iterations, allowing us to maximize the tool's usefulness and unlock its full potential in a variety of contexts.