

Exploring ChatGPT-4 And Large Language Models

In this episode, the focus is on ChatGPT-4 and large language models – how they have revolutionized our interaction with machines, opening a world of possibilities and raising intriguing questions about our technological future.

About The Guest

Daniel Whitenack is a data scientist at SIL International and co-hosts an AI podcast called Practical AI. SIL International specializes in language-based work, including literacy, education, and translation, along with mapping language populations worldwide. Daniel is also building Prediction Guard, a tool designed for integrating AI into applications.

What Is ChatGPT-4?

On the frontend, ChatGPT-4 is a user-friendly chat interface that answers queries in natural language text. Under the hood, it is a causal language model, which means it predicts what comes next in a sequence based on what came before. The AI system takes a user’s natural language text and attempts to predict what would naturally follow – essentially completing the sequence.
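To make that idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library, with GPT-2 (a small, freely available causal language model) standing in for ChatGPT-4, which is only reachable through OpenAI’s interface:

```python
# A minimal sketch of causal language modeling, assuming the
# Hugging Face transformers package is installed. GPT-2 is a small
# stand-in here; the principle is the same for larger models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A geographic information system is used to"
# The model simply predicts the most likely continuation of the prompt.
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
```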

How Do Predictive Language Models Work?

At their core, language models are powered by neural networks. These neural networks can be viewed as a pipeline of data transformations: you input a set of data, which undergoes a series of transformations through a myriad of functions, and you receive a different set of data at the other end. Often, this data is a sequence of numbers that can be decoded into text.
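As a rough illustration of that pipeline, here is a toy PyTorch model (the layer types and sizes are arbitrary; real language models use attention layers and billions of parameters): token ids go in, pass through a stack of transformations, and a score for every possible next token comes out.

```python
import torch
import torch.nn as nn

# A toy "pipeline of transformations": a sequence of numbers (token ids)
# goes in, and a different set of numbers (scores per token) comes out.
vocab_size, embed_dim, hidden_dim = 1000, 64, 128

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # ids -> vectors of numbers
    nn.Linear(embed_dim, hidden_dim),     # transformation 1
    nn.ReLU(),                            # transformation 2
    nn.Linear(hidden_dim, vocab_size),    # transformation 3: scores per token
)

token_ids = torch.tensor([[5, 42, 7]])      # an encoded text sequence
logits = model(token_ids)                   # shape: (1, 3, vocab_size)
next_id = logits[0, -1].argmax().item()     # "decode": most likely next token id
print(next_id)
```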

Training Predictive Language Models

A vital aspect of predictive language models is the set of parameters that shapes the data transformations. These parameters can number in the billions, and the challenge lies in finding the values that produce the desired transformation. A model learns to find the right parameters, and it gets better at this with repeated training and human feedback.

Typically, a model’s process of finding the right parameters is akin to trial and error. You give the model a large array of text examples and set it the task of predicting the next word in each sequence. By doing this repeatedly with varied sequences, the model learns to adjust its parameters, just as a child learns to speak by mimicking and repeating words. This simple yet labor-intensive learning happens at a massive scale, with large models training on text from the entire internet or vast collections of books. That scale is the secret ingredient that gives these models their immense predictive power.
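A sketch of that trial-and-error loop might look like the following, with random token ids standing in for real training text: shift each sequence by one position so the target at every step is the next token, measure how wrong the prediction was, and nudge the parameters accordingly.

```python
import torch
import torch.nn as nn

vocab_size = 1000
# Toy model; real LLMs stack many attention layers instead.
model = nn.Sequential(nn.Embedding(vocab_size, 64), nn.Linear(64, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random ids stand in for tokenized sentences from a real corpus.
batch = torch.randint(0, vocab_size, (8, 16))

for step in range(100):
    inputs, targets = batch[:, :-1], batch[:, 1:]  # target = the next token
    logits = model(inputs)                         # (8, 15, vocab_size)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()    # measure how each parameter contributed to the error
    optimizer.step()   # adjust the parameters slightly in the right direction
```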

Why Human Input Is Needed in The AI Training Process

Despite the considerable strides made in AI, these models remain, at their core, advanced pattern matchers. They derive probable combinations of text based on the patterns they have observed and learned. However, without human input, this autonomous learning can lead to perplexing, and sometimes disturbing, results.

Take the instance of the AI model Galactica. It was trained extensively on scientific literature, and so, when asked an absurd question like the number of giraffes that have visited the moon, it provided a detailed, scientific response complete with citations. Clearly, this is not the response a human would ideally want. By integrating human judgment and intuition, these models are refined and fine-tuned to generate more accurate and contextually appropriate predictions.

What Does Temperature Mean In AI?

The balance between predictability and creativity in AI models presents a paradox. The ‘sameness’ induced by pattern matching contrasts starkly with the creativity these models exhibit. This is made possible by a setting called ‘temperature’.

In AI, temperature is a hyperparameter that controls the degree of variation the model can exhibit while choosing the next probable sequence. It essentially introduces an element of creativity in the otherwise structured world of AI, mimicking the human ability to express the same idea in various ways.
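Mechanically, the model’s raw scores (logits) are divided by the temperature before being converted into probabilities, so a low temperature concentrates probability on the top choice while a high one spreads it across the alternatives. A small, self-contained sketch, with made-up scores for four candidate tokens:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Sample a token id; temperature rescales how adventurous the choice is."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

logits = [4.0, 3.5, 1.0, 0.2]  # made-up model scores for four tokens

# Low temperature: almost always the top token (predictable).
# High temperature: lower-ranked tokens get real probability (creative).
for t in (0.2, 1.0, 2.0):
    picks = [sample_next_token(logits, t) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=4) / 1000)
```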

Having this variability opens up possibilities for AI to act as a creative muse, prompting new ways to approach challenges and tasks. However, high temperature can lead to unpredictable and inconsistent results, highlighting the crucial role of human involvement in configuring the parameters according to specific needs.

How AI Hallucinations Occur

AI hallucinations occur when an AI model produces coherent yet factually incorrect information. This phenomenon can even occur with low ‘temperature’ settings, as it is fundamentally tied to the data the model has been trained on. If the model is prompted with a question that is misrepresented, or not represented at all in its training data, it might fabricate an answer based on the patterns it has learned. To mitigate this, it is advisable to practice AI grounding.

Countering AI Hallucinations with AI Grounding

AI Grounding refers to the practice of infusing external, factual knowledge into the prompts. This can be done by integrating a reliable knowledge base or a set of documentation into the model. By prompting the model based on this grounded context, it is effectively anchored to factual reality, reducing the risk of hallucinations.
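A minimal sketch of the idea, using a made-up two-snippet knowledge base and a deliberately naive keyword retriever (production systems typically use embedding-based vector search instead):

```python
# Hypothetical knowledge base; the snippets are placeholders.
KNOWLEDGE_BASE = [
    "Placeholder fact: product X reads formats A and B.",
    "Placeholder fact: long-term releases of product Y are supported for one year.",
]

def retrieve(question: str) -> str:
    """Return the snippet sharing the most words with the question."""
    words = set(question.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda doc: len(words & set(doc.lower().split())))

def grounded_prompt(question: str) -> str:
    """Anchor the model to retrieved facts instead of its training data."""
    return (
        "Answer using ONLY the context below.\n"
        f"Context: {retrieve(question)}\n"
        f"Question: {question}"
    )

print(grounded_prompt("How long are long-term releases of product Y supported?"))
```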

A Surprising Creativity in AI

The initial expectation for AI was that it would excel in logic but lack creativity. However, the reality has turned out to be the complete opposite. AI models today demonstrate a surprising degree of creativity, capable of generating images from text or even creating a unique rap song. Yet, they often fall short when it comes to logical consistency, requiring human intervention to ensure factual consistency and grounding in reality.

Prompt Engineering In AI

Prompt engineering, as it applies to artificial intelligence (AI), refers to the structuring of prompts or requests to elicit the most accurate, relevant and appropriate responses from AI models. Like a Google search, you input a query and expect an answer, but there’s more to it when dealing with AI.

In AI, a good prompt is grounded in reality and context. This grounding differentiates it from a simple Google search as it involves injecting external knowledge and setting parameters around which the AI should generate a response.

Another dimension of prompt engineering is controlling the AI’s output. You can design the prompt to instruct the AI on what to do when it doesn’t find an answer in the given context. For example, the prompt could instruct the AI to respond with an apology if it cannot find an answer.
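For example, a template along these lines (the wording and the fallback message are purely illustrative) bakes that instruction into every request:

```python
def build_prompt(context: str, question: str) -> str:
    """Hypothetical prompt template with an explicit fallback instruction."""
    return (
        "Answer the question using only the context provided.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        "If the answer is not in the context, reply exactly: "
        "\"Sorry, I don't have enough information to answer that.\""
    )

prompt = build_prompt(
    context="The MapScaping Podcast publishes episodes about geospatial topics.",
    question="Who hosts the podcast?",
)
print(prompt)  # this string would then be sent to the language model
```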

AI As an Evolutionary Tool in Coding

In the coding domain, AI has the potential to serve as a digital pair-programming assistant. This is already evident in tools like GitHub’s Copilot, which augments code writing. These tools can reliably generate predictable code structures, automating repetitive tasks like pulling data from SQL databases.

However, the issue of ‘hallucination’ remains a significant challenge. Even though AI can generate near-perfect code, human intervention is often required to ensure accuracy and usability. This makes AI more of a code-generation aid (saves considerable time and effort) than a fully autonomous code-writing entity.

A Future with AI Technology: A Tool for Evolution or Elimination?

The critical question is whether we should fear AI or embrace it. The concerns typically revolve around job loss due to automation or a dystopian future where AI gains sentience and seizes control. However, a more balanced view acknowledges that while AI systems have their risks, they are tools that transform the way we work rather than eradicate employment.

AI systems, much like the evolution from typewriters to computer word processors, introduce changes to job roles rather than eliminating them entirely. While certain roles may become obsolete, new ones emerge that require an evolved set of skills. For instance, data entry jobs are more abundant now than during the era of typewriters, albeit in a different format.

Concerns with AI should focus more on the over-reliance on AI systems without adequate understanding of their limitations and without robust engineering to manage edge cases and potential system failures.

This is perhaps the quote from the episode that I have spent the most time thinking about:

“We always thought AI would be logical and lack creativity – but it is almost the exact opposite”

This reframes the idea of being wrong as being creative, which, you could argue, really depends on the context!

If you have not already played around with ChatGPT, it’s well worth spending the time to experiment with it … while it’s still free 😉

https://chat.openai.com/auth/login

Further listening 

If you have not already listened to this episode about computer vision and GeoAI, you might find it interesting. Listen out for the discussion around plausible/realistic data and real measurements – I think it gives more context to the use cases for generative AI.

You might also enjoy this episode about fake satellite imagery 

BTW, I have started a job board for geospatial people.

Feel free to check it out!

About the Author
I'm Daniel O'Donohue, the voice and creator behind The MapScaping Podcast (a podcast for the geospatial community). With a professional background as a geospatial specialist, I've spent years harnessing the power of spatial to unravel the complexities of our world, one layer at a time.