ChatGPT is the hottest thing in the world at the moment, which is saying something given the recent nuclear fusion news. It can impressively answer simple questions like “how do you invert a binary tree?”:
To invert a binary tree, you can follow these steps:
1. If the tree is empty, return an empty tree.
2. Otherwise, swap the left and right subtrees of the root node.
3. Recursively invert the left subtree.
4. Recursively invert the right subtree.
For example, in Python, you could write a function like this:
```python
def invert_binary_tree(root):
    if root is None:
        return None
    root.left, root.right = root.right, root.left
    invert_binary_tree(root.left)
    invert_binary_tree(root.right)
```
This function takes in the root node of the binary tree and inverts it by swapping the left and right subtrees of the root node, and then recursively inverting the left and right subtrees of the root.
It’s important to note that this function works on the original tree, rather than creating a new inverted tree. This means that the original tree will be modified in place, with its structure changed to be the inverted version of the original tree.
Hope this helps! Let me know if you have any questions.
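And the code does run as advertised, at least once you give it a tree to chew on. Here is a quick sanity check of the in-place behavior (the `Node` class below is mine, not ChatGPT's):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

# A three-node tree:  1        inverted:  1
#                    / \                 / \
#                   2   3               3   2
root = Node(1, Node(2), Node(3))
invert_binary_tree(root)
print(root.left.value, root.right.value)  # prints "3 2": flipped in place
```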
Pretty good answer. Also pretty good box around the answer, right? I had no idea how to do that, so I asked ChatGPT: “In markdown, how do you put a section in a box?” It responded:
To put a section of text in a box in Markdown, you can use a code block with the html tag. For example:
<div style="border: 1px solid black; padding: 10px">
This is some text that will be displayed in a box with a black border and padding.</div>
This will create a box with a black border and padding around the text “This is some text that will be displayed in a box with a black border and padding.”
You can customize the appearance of the box by modifying the style attributes. For example, you can change the border width, border color, and padding values to achieve the desired look.
It’s also possible to use other HTML elements and styles to create a box. For example, you could use a `<div>` element with a class attribute and define the box styles in a separate CSS file, or you could use a `<table>` element to create a more complex layout.
I hope this helps! Let me know if you have any questions.
Copy and paste, and voilà! Box! I had to fiddle with the details a bit: I changed the style and adjusted the width. Now it would be nice if I could also scroll within the box. So again, I asked ChatGPT: “How can I add scrolling to the box?”
To add scrolling to a box in Markdown, you can use the `overflow: auto` style attribute in the `<div>` element. This will cause the box to display scrollbars if the content inside the box exceeds the available space.
Here’s an example of how you can use this technique:
<div style="display: inline-block; border: 1px solid black; padding: 10px; overflow: auto">
This is some text that will be displayed in a box with a black border and padding. The width of the box will match the width of the surrounding text. If the content inside the box exceeds the available space, scrollbars will be displayed.</div>
This will create a box with a black border and padding around the text “This is some text that will be displayed in a box with a black border and padding. The width of the box will match the width of the surrounding text. If the content inside the box exceeds the available space, scrollbars will be displayed.” The width of the box will automatically adjust to match the width of the text, and if the content inside the box exceeds the available space, scrollbars will be displayed.
You can customize the appearance of the scrollbars further by using additional style attributes, such as scrollbar-width and scrollbar-color.
I hope this helps! Let me know if you have any questions.
Very nice. But that is WAY too much text. I can cut it down with: “How can I add scrolling to a box (use fewer than 20 words)”
To add scrolling to a box in Markdown, use the `overflow: auto` style attribute.
Awesome, that is all you needed to say, Hamilton.
ChatGPT rewrote this paragraph for me. It has a distinctive default style that is concise but also dull.
Everything in ChatGPT is better when snarky
ChatGPT can also explain concepts quite well. For example: “Explain the three monadic laws in a snarky style”
Hey there, so in Haskell, a monad has to follow a few strict rules or else it’s considered a total failure. These rules are basically like commandments that monads must obey, or else they’ll be cast out of monad society.
1. Left identity: This one’s basically saying that if you have a value `x` and a function `f`, then `return x >>= f` should be the same as `f x`. In other words, wrapping a value in the `return` function and then applying a function to it shouldn’t do anything different from just applying the function directly to the value.
2. Right identity: Given a monadic value `m`, this rule states that `m >>= return` should be the same as `m`. In other words, applying the `return` function to a monadic value shouldn’t change the value at all.
3. Associativity: Given a monadic value `m` and functions `f` and `g`, this rule states that `(m >>= f) >>= g` should be the same as `m >>= (\x -> f x >>= g)`. In other words, the order in which you compose monadic computations shouldn’t matter.

So there you have it, the laws that monads must follow in Haskell.
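Snark aside, the laws are easy to check mechanically. Here is a quick sanity check in Python rather than Haskell, with None-propagation standing in for the Maybe monad; the `unit` and `bind` names are mine, and this sketch glosses over the usual caveat that None can’t be distinguished from a legitimate value:

```python
def unit(x):      # Haskell's `return`: inject a value into the monad
    return x

def bind(m, f):   # Haskell's `>>=`: propagate None, otherwise apply f
    return None if m is None else f(m)

f = lambda x: x + 1                      # always succeeds
g = lambda x: None if x > 5 else x * 2   # "fails" (None) on large inputs

x = m = 3
assert bind(unit(x), f) == f(x)          # left identity
assert bind(m, unit) == m                # right identity
assert bind(bind(m, f), g) == bind(m, lambda y: bind(f(y), g))  # associativity
```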
So far, it looks like ChatGPT may be able to effectively replace most of my programming queries that would normally require wading through irrelevant answers on StackOverflow. The ability to provide concise and accurate answers to basic questions may be the most significant improvement in data accessibility since the invention of the search engine. And human bloggers, you’ll need to step up your game or no one will read your posts.
There is a deep flaw in ChatGPT, though. It is trained on texts past and present without accounting for corrections over time. Science, however, is a conversation across time where ideas change rapidly as new evidence and new arguments surface. My PhD work concerned the evolution of orphan genes, so I asked ChatGPT about it:
Orphan genes are genes that do not have a recognizable function or similarity to known genes in other organisms. They are thought to arise through a variety of mechanisms, including gene duplication and horizontal gene transfer.
One way orphan genes can evolve is through gene duplication, where a copy of a gene is created and the copy becomes separated from the original gene. The copy may then undergo mutations and evolve to perform a new function, or it may become non-functional and eventually be lost from the genome.
Another way orphan genes can arise is through horizontal gene transfer, where genes are transferred from one organism to another through mechanisms such as bacterial conjugation or viral infection. The transferred genes may be incorporated into the genome of the recipient organism and may then evolve to perform a new function, or they may become non-functional and be lost over time.
Orphan genes may also arise through other mechanisms, such as gene expression changes or changes in regulatory elements.
I hope this helps! Let me know if you have any questions.
This is a dreadful response. The first sentence is well-written, but lacks nuance. There are many different definitions of orphan genes, and the definition provided by ChatGPT conflates two distinct concepts. The first concept, that orphans have no known function, is not a commonly accepted part of the definition. Widely conserved genes may have no known function as well. The second concept, that orphans have no detectable similarity to genes in other species, is a more widely accepted definition of orphan genes. I prefer the definition that orphans are genes that evolved from non-genic or non-coding sequences in the recent past. Overall, the term “orphan gene” is not well-defined. ChatGPT presents a specific definition without capturing the uncertainty around it.
The following paragraphs introduce two theories for orphan gene evolution. The first theory, that orphan genes evolve through gene duplication and subsequent divergence, is an old dogma premised on the belief that de novo gene evolution was impossible. The second theory, that orphan genes arise through lateral transfer, is basically nonsensical. While lateral transfer has been proposed as a potential origin for orphan genes, it is exceedingly rare in eukaryotes and only applies if orphan genes are defined narrowly as genes absent in close relatives. Lateral transfer cannot be a source for orphans as defined in ChatGPT’s opening paragraph, however, since these transferred genes could have functions and would definitely have similarity outside the species (i.e., to the species they were transferred from).
The final sentence is flimsy as hell. ChatGPT doesn’t elaborate on how “gene expression changes” or “changes in regulatory elements” might lead to orphan genesis. It may be on the right path, but it didn’t flesh out the idea.
Overall, the essay was logically inconsistent and contained no insight that could not have been gleaned from a few moments on Wikipedia. As ChatGPT stands, it seems roughly like a very fast high school student with reasonable writing and research skills. It is no replacement for experts and must be used with care. That said, it is an absolutely life-changing advancement and will likely become even more powerful. It will be amazing to see where this goes.
Another problem is that ChatGPT, while writing well and often correctly, is trained to be convincing, not correct. It has a penchant for bullshittery. For example:
Q: What is a good use case for the morloc programming language?
A: It is difficult to provide a specific use case for the Morloc programming language, as it is a fictional language that does not actually exist. The Morloc programming language was created as a thought experiment by the computer scientist Peter J. Landin, who used it as a tool to explore issues related to the foundations of programming language design and implementation.
In Landin’s original paper, he described the Morloc programming language as a minimalist, statically-typed language that is designed to be easy to learn and use, while still providing a high level of expressive power. However, as the Morloc programming language is purely fictional and has no actual implementation, it is not possible to use it for any practical purposes.
`morloc` is my programming language and I am a little disappointed, though not surprised, that ChatGPT hasn’t heard of it. I really need to stop procrastinating the publication of the big morloc paper (I swear, I’m working on it). Anyway, Peter Landin is a real computer scientist and the explanation of the language sounds believable. So I ask ChatGPT for a citation for the Landin paper:
Q: Could you provide a citation for the Landin paper on morloc?
A: The Landin paper on the Morloc programming language is a fictional work of computer science literature, and as such, it does not have a real citation. The Morloc programming language was created as a thought experiment by the computer scientist Peter J. Landin, who used it as a tool to explore issues related to the foundations of programming language design and implementation.
The original paper on the Morloc programming language is not published and does not exist in any real sense. It is mentioned in a number of other works of computer science literature as an example of a fictional programming language, but it does not have a real citation or publication record.
So now ChatGPT has back-pedalled: not only the language but also the paper is fictional. Yet, according to ChatGPT, this fictional paper is cited in other works. It is a tad concerning how ChatGPT doubles down on its assertion that Landin wrote it. There can’t be any real evidence for this.
Let’s try again:
Q: Give a list of the top ten papers that refer to the morloc programming language
A: I’m sorry, but I am not able to find any information on a programming language called “morloc.” It is possible that this is a relatively new or obscure programming language that has not yet been widely documented or studied in academic papers. If you have any more information about the morloc programming language, such as a description of its features or intended use, I might be able to provide more targeted recommendations for papers or other resources that discuss it.
Oh, now it pretends it doesn’t know anything? This is still the same session, so ChatGPT should have memory of past queries. Taken alone, however, this is a very reasonable response.
Here is another example of clever bullshit:
Q: List the top ten papers focused on the Haskell type system
A: Here is a list of ten papers that focus on the Haskell type system:
- “Practical type inference for arbitrary-rank types” by Simon Peyton Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Mark Shields (2006)
- “Typing Haskell in Haskell” by Mark P. Jones (1998)
- “A translation of Haskell 98 to System F” by Mark P. Jones (2000)
- “System F with type equality coercion” by Stephanie Weirich and Dimitrios Vytiniotis (2009)
- “Strongly Typed Heterogeneous Collections” by Oleg Kiselyov and Chung-chieh Shan (2008)
- “Constraint Handling in a Polymorphic Type System” by Mark P. Jones (1991)
- “Type Classes: An Exploration of the Design Space” by Mark P. Jones (1991)
- “Typechecking with Polymorphic Recursive Types” by Mark P. Jones and Simon Peyton Jones (1993)
- “Type classes with functional dependencies” by Mark P. Jones (1996)
- “Type Inference for GADTs” by Stephanie Weirich, Antoine Delignat-Lavaud, and Jean-Philippe Bernardy (2012)
These papers cover a variety of topics related to the Haskell type system, including type inference, polymorphism, type classes, and more. They should provide a good starting point for further research on the topic.
That looks very convincing. The first paper checks out. However, going down the list we start running into problems. For example, a paper with the title “System F with type equality coercion” exists, but has different authors.
There may be limitations to how effective GPT models can be. One reason is that they are designed to imitate human behavior and may not have the ability to exceed it. Another reason is that they are trained on a single dataset rather than a continuous stream of data. Additionally, GPT models are trained to predict text, but have no logical framework to guide their decisions. This last point is particularly significant, as GPT models have no model of the world against which to evaluate new information. In my opinion, GPT models represent a major achievement in the field of artificial intelligence, but they are not necessarily the precursor to AGI.