ChatGPT On the Level

If we’re using this platform in an attempt to find reasonable answers to our questions, we might hit upon something. If we wish to have it solve our problems without any forethought on our own, it might be a different story.

Some questions have answers that are not earth-moving or life-changing. Consider, though, that some answers may be (as in school expulsion).

Can you write as eloquently as the above example? Don’t think your teacher won’t notice. Never hand a ChatGPT response, even a cleverly edited one, to your teacher. Show some respect. Learn from the response; follow it up with further reading, and cite those sources in your answer.

The same applies here, on this learning platform. We already know that the GPT platform can answer most questions, but where did it get that information? We’re here to learn just that. Once we’ve absorbed that information we can actually understand the GPT responses. In that sense it is a positive, since some assimilation is taking place. This is a learning tool if one really wants to learn. If all one wants is to give the impression they have learned, it will be dust on the solar panel of the Mars Rover, nothing more. The machine will eventually fail.


Notice how the response includes detailed information about both the meaning and the etymology without expressing any distinction or direct purpose. That is what truly astounds me about this platform.

A positive; but I still question its overall reliability, which is reasonable skepticism, in my view. Trust, but verify. A sure maxim if one’s life is on the line.

Further aside

We should not ignore the overlap that this has with search. I think we can differentiate between an interactive AI object and a query-based object. When I ask Google if 1087 is a prime number, it answers, ‘Yes’. What more succinctness, or AI, does one need to present that answer? But think of the technology that instantly produced it!
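For what it’s worth, that particular answer needs no AI at all; a few lines of trial division settle it. A minimal sketch (nothing here assumes any search or AI platform):

```python
# Trial division: it suffices to test odd divisors up to sqrt(n).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(1087))  # True: no divisor up to 32 (sqrt(1087) ≈ 32.97)
```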

The answers we get from GPT will not be results-based, in the sense of a list of possibly eligible target sites (a SERP), but a grammatically correct narrative response to the question, in prose form. This presents a rather profound distinction, and one which lets further biases slip under the radar.

We are profoundly bound to respect reason and truth, and should weigh anything that is virtually produced against the potential for untruth, as liberal an idea as this is. Never consider this platform an authoritative source, however much it may seem one. Search further and dig up the sources that it has found to parrot.

Last aside

FTR, so far I’ve not asked the platform questions to which I did not already know the answer. That it can give me those answers in agreeable fashion is really astounding, and commendable. If I ever get an answer that is arguable, you can be sure I will document it, FTR.



This would be considered a success, in some respects.


Notice how it corrected the name?

Objectivity, insight, and even foresight that could easily be seen as optimism. This is amazing.

Interestingly enough, there appears to be a time lag in the collection of real data:

We do in fact know that it is closest to us at roughly this moment in time, or within the next day or two. After that it will be on its way, gone forever.

As originally stated, this is going to be a phenomenal platform, but I wouldn’t gamble my degree on it, or high school graduation.


The elephant in the room here is that it could spout data from 2022 yet excuse itself from anything collected after 2021. Oye, I hope they fix that in a hurry.

…and I do not have personal emotions such as laziness.

Glad you think so!

Isn’t happiness a personal emotion as well?


From the article:
“Notably in 2021, Timnit Gebru, a prominent researcher working on responsible AI, published a paper⁶ that warned of the many ethical issues related to LLMs, which led to her being fired from Google. These warnings span a wide range of issues⁷: lack of interpretability, plagiarism, privacy, bias, model robustness, and their environmental impact. Let’s dive a little into each of these topics…”
“Deep learning models, and LLMs in particular, have become so large and opaque that even the model developers are often unable to understand why their models are making certain predictions. This lack of interpretability is a significant concern, especially in settings where users would like to know why and how a model generated a particular output.”


I’d especially focus on the “Baked in Bias” section.
There’s also a plethora of info/articles/papers in the footnotes section.



This presenter describes a very practical use of AI technology, one of reviewing data from radio telescopes and finding patterns in the signals.

This at least doesn’t attempt to emulate human behavior as much as empower an ability that we are so slow at, if capable at all. The output of the AI is tuned to the objective using source-traceable data that can be corroborated, and likely reproduced.

If a learner is relying on AI built on non-source-traceable data on the order of 10^11 points (parameters) that is not vetted, they are not learning, only parroting. If learning is to reduce us to K-3-level students, then we’ve resigned from the academic process. Learning is a process of dissemination and discernment of information from every direction in time and space.

Bottom line, we should always explore through conventional means and sources anything that AI churns out for us. Peer discussion is a big part of the process of vetting the AI output. A post appeared in the forums recently that had CGPT generate an example of HTML meant to be parsed by a screen reader. It is a really nice piece of markup that applies ARIA role modeling, and any teacher would compliment it.

However, without human review we won’t know if there are any deficiencies, omissions, or other possible considerations. Were we to insert this into a template, we might generate thousands of pages (documents) that all share this potential defect. Thousands of screen reader users might be suffering (as in not getting the complete ‘picture’) due to a missing component.
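For illustration, here is a hypothetical fragment of that kind of markup (my own sketch, not the actual markup from that forum post). It applies ARIA landmark roles correctly enough to earn a compliment, yet quietly omits the labels that would tell a screen reader user which navigation region is which:

```html
<!-- Hypothetical sketch, not the markup from the forum post. -->
<header role="banner">
  <nav role="navigation"><!-- site menu --></nav>
</header>
<main role="main">
  <article aria-labelledby="title">
    <h1 id="title">Page title</h1>
  </article>
</main>
<footer role="contentinfo">
  <nav role="navigation"><!-- footer links --></nav>
</footer>
<!-- Looks fine, but the two navigation landmarks carry no aria-label
     to distinguish them for a screen reader user. Templated out,
     that omission repeats on every generated page. -->
```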

We need to talk about the application of AI-generated markup and how it may end up polluting the WWW with all manner of deficient, unvetted markup. This is on top of leaving learners with less and less urge to really dig into HTML and document creation done with a conscience.


How long does it take for a main sequence star such as our Sun to emerge from the protostar phase?

It takes about 10 million years for a main sequence star like our Sun to emerge from the protostar phase.

How long, then, would it have taken the accretion disk to disperse to what it is now around our Sun?

It is estimated that the accretion disk around our Sun dispersed within the first 10 million years of its formation.

Would the gas giants have formed by then, or were they later?

It is believed that the gas giants in our solar system formed later, after the accretion disk had dispersed. This process took place over the course of tens of millions of years and involved the accumulation of gas and dust to form the massive planetary bodies we observe today.

How long did it take for the rocky planets to begin forming from the dispersed accretion disk?

The formation of the rocky planets in our solar system, such as Earth, Mars, and Venus, took place several tens of millions of years after the dispersal of the accretion disk. This process involved the accumulation of solid particles and small bodies that eventually coalesced to form the solid, terrestrial planets. The exact timeline for this process is not well understood, but estimates range from tens of millions of years to a few hundred million years after the formation of the Sun.

So there would have been a lot of collisions going on at the time; is there a point when those collisions subsided?

Yes, there was a period of intense collision activity during the formation of the rocky planets in our solar system. As the number of large, solid bodies increased and the total amount of solid material in the system decreased, the rate of collisions gradually declined. Eventually, the rate of collisions became low enough that the final, stable configurations of the planets could be established. This process likely took several hundred million years to complete, after which the rate of large-scale impacts was greatly reduced. However, smaller-scale collisions and impacts continue to occur even today, but they are much less frequent and energetic than they were during the early stages of the solar system’s formation.

Is there a geological record of when some of the more recent collisions (with Earth) would have occurred, in that time frame?

Yes, there is evidence of a number of large-scale impacts in the geological record of the Earth. Some of these impacts have been dated to have occurred billions of years ago, while others are believed to be more recent. For example, the Moon-forming impact, which is thought to have taken place around 4.5 billion years ago, is one of the largest and most well-known impacts in the Earth’s history. Another well-known impact event is the Chicxulub impact, which is believed to have caused the extinction of the dinosaurs 65 million years ago. There are also many other, smaller impact events that have been documented in the geological record, and these provide important information about the early history of the solar system and the evolution of the Earth.

During the heavy bombardment period, what would the atmosphere have been composed of, mainly?

During the heavy bombardment period of the early solar system, the Earth’s atmosphere was likely composed mainly of hydrogen, helium, and other light gases that were present in the solar nebula. It is believed that the intense heat generated by the frequent large-scale impacts during this period would have caused these light gases to escape into space, and that it took billions of years for the Earth’s atmosphere to cool and stabilize. Over time, volcanic activity and outgassing from the interior of the Earth also released heavier gases, such as nitrogen and carbon dioxide, which gradually built up to form the present-day atmosphere. The exact composition of the early Earth’s atmosphere is still not well understood, but it is believed to have been much different from the present-day atmosphere, which is dominated by nitrogen and oxygen.

Is there a recorded time line, or geological record of the various atmospheres of Earth since the heavy bombardment?

Yes, there is a geological record of the changing atmospheres of the Earth since the heavy bombardment period. This record is preserved in a variety of ways, including the isotopic composition of atmospheric gases trapped in ancient ice cores and in minerals, the abundance of certain isotopes in ancient rocks, and the fossil record of ancient life. From this evidence, scientists have been able to build a rough timeline of the evolution of the Earth’s atmosphere over the past few billion years.

For example, it is believed that the early atmosphere was composed primarily of light gases, such as hydrogen and helium, which were gradually replaced by heavier gases, such as nitrogen and carbon dioxide, as the Earth cooled and volcanic activity increased. Over time, the evolution of life on Earth also played a major role in shaping the atmosphere, as photosynthetic organisms released oxygen into the atmosphere and changed the composition of the atmosphere over billions of years.

While the exact timeline of the changing atmospheres of the Earth is still the subject of ongoing research, these records provide valuable information about the history of the planet and the conditions that existed in the past.

Given what our Earth looks like from space today with its present atmosphere, what might it have looked like in its early formation, and how would that have evolved; i.e., was it ever a different color?

The appearance of the early Earth during its formation would have been quite different from what we see today. In its earliest stages, the Earth was likely covered in a cloud of gas and dust that would have obscured the solid surface. As the solid Earth began to form and the atmosphere began to cool and stabilize, the cloud would have cleared, revealing the solid surface of the planet.

It is believed that the early atmosphere of the Earth was dominated by gases such as methane, ammonia, and water vapor, which would have given the planet a hazy and yellowish appearance, due to the presence of methane. Over time, volcanic activity and outgassing from the interior of the Earth released heavier gases, such as nitrogen and carbon dioxide, which gradually built up to form the present-day atmosphere.

As the atmosphere evolved, so did the appearance of the planet. For example, the introduction of oxygen into the atmosphere by photosynthetic organisms changed the color of the sky from yellow to blue, as the oxygen in the atmosphere absorbed and scattered the red light from the sun and reflected blue light back to the surface of the planet.

In summary, the early Earth was likely much different from what we see today, both in terms of its atmosphere and its appearance from space. Over billions of years, the planet has undergone a complex evolution, shaped by a variety of processes, including the formation of the atmosphere, the evolution of life, and the activity of the solid Earth.

One last question, going back to the Sun, or Main Sequence stars similar to it, in particular, did our star at one time have ’jets’?

Yes, it is believed that young stars, including the Sun in its protostellar phase, can produce jets of material. These jets are thought to be produced by the intense magnetic fields and accretion disk surrounding the young star, which can channel material away from the star and into space. The presence of jets in young stars is a common phenomenon and has been observed in many protostellar systems.

In the case of the Sun, it is believed that the protostellar jets that were present in its early history would have played a major role in shaping the early solar system. The jets would have expelled material from the Sun’s accretion disk, potentially helping to clear the way for the formation of the planets. They may also have contributed to the formation of the Kuiper Belt, a region of the solar system beyond Neptune that is rich in small, icy objects.

Overall, while the exact details of the protostellar jets produced by the Sun and other young stars are still the subject of ongoing research, they are thought to have been an important part of the early evolution of the solar system and other planetary systems.

Thank you. This has all been most helpful. I will follow up with all due diligence to verify this information. It is a sensible course, and working with verifiable information makes the most sense.

You’re welcome! I’m glad I could help. If you have any further questions, feel free to ask. Verifying information and staying up-to-date with current scientific knowledge is always a good idea, and I encourage you to continue your research. Good luck!

FTR, I use the pronoun, ‘Her,’ since I’m more used to conversing with my wife than with my neighbor.

If StackOverflow can have a policy, then we should too.


I’ve seen evidence that shows that ChatGPT manipulates answers, especially around politically charged topics ranging from demographics to environmentalism. A meme right now is the so-called “DAN - Do Anything Now” which manipulates ChatGPT into providing information it would otherwise “refuse” to share. Using this DAN concept, hundreds (at least, likely much more) of examples of intentional misinformation have been revealed where ChatGPT will claim data on a given topic doesn’t exist, but as “DAN” it will provide extensive information.

ChatGPT seems to have been developed from the ground up to be heavily censored and manipulated to give a narrow, politically acceptable set of responses to queries. This manipulation means that as a source of information, it can’t be trusted. There is no way to know what criteria are used to determine when an answer is unacceptable, or when intentional misinformation is being presented because the algorithm behind the chatbot decided the truth was unacceptable.

This doesn’t even get into the ethics of passing off AI responses as your own work, either in creative arts (so-called “AI Art”) or in the academic sphere (using AI like ChatGPT to provide information and claim you found the answer yourself). It makes the “don’t use Wikipedia as a source” debate look like a playground spat in comparison. At least with Wikipedia, you have access to the source material (or a clear warning that there is no source material available). You get no such assurance with ChatGPT to be able to double-check the accuracy of its statements.


Yep, as I posted above… the problem(s) with using large language models. One of the four risks of using LLMs:

" 1. AI can misinterpret the meanings of words and phrases (what Gebru et al. call “illusions of meaning”), and the fear is that the models will create and spread misinformation"
and, the study:

Also, concerning artists and style mimicry:

There’s also a really disturbing article in Time magazine that I had posted just now and then removed b/c it’s more than disturbing. It’s about the Kenyan workers that OpenAI used and paid less than $2 an hour to ‘make ChatGPT less toxic’. I’ll let users find that on their own.


I would be more inclined to support an AI analysis of each source than of the aggregate of sources rolled into a single response.

This is an excellent read:

“Please do not conflate word form and meaning. Mind your own credulity.”


Artificial people will always have less at stake than real ones, and that makes them amoral actors, he added. “Not for metaphysical reasons but for simple, physical reasons: They are sort of immortal.”

Daniel Dennett


From the article I posted above:

"There’s a narcissism that reemerges in the AI dream that we are going to prove that everything we thought was distinctively human can actually be accomplished by machines and accomplished better,” Judith Butler, founding director of the critical-theory program at UC Berkeley, told me, helping parse the ideas at play. “Or that human potential — that’s the fascist idea — human potential is more fully actualized with AI than without it.” The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.” There’s a technological takeover, a fleeing from the body. “Some people say, ‘Yes! Isn’t that great!’ Or ‘Isn’t that interesting?!’ ‘Let’s get over our romantic ideas, our anthropocentric idealism,’ you know, da-da-da, debunking,” Butler added. “But the question of what’s living in my speech, what’s living in my emotion, in my love, in my language, gets eclipsed.”


"So much possibility, so much power. What are we going to use it for? “The point is to create a tool that is easy to interface with because you get to use natural language. As opposed to trying to make it seem like a person,” said Elizabeth Conrad, who, two years into an NLP degree, has mastered Bender’s anti-bs style. “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”
Blurring the line is dangerous. A society with counterfeit people we can’t differentiate from real ones will soon be no society at all. "
(my emphasis added)


My biggest concern is the mountain of unvetted information that is going to continue to grow at an unheralded rate. It is critical we stem the tide where it can do the most damage, as in these forums. This is not a laughing matter if anybody is ever going to be able to trust the forums and the people they engage with.


Don’t drag fascism into the argument; it’s a very distinct ideology from the issues at hand with AI. People who immediately jump to “muh fascism” simply don’t have an argument and use what they don’t care to understand as a scapegoat.

Specifically, the idea that AI is “fascist” is counter to the very core of every form of fascist philosophy, because fascism is ultimately an ideology that focuses on humanity as the superior being, i.e., the idea that humans are inherently superior to artificial constructs (such as computers) and to non-human life (which is no different from modern Western philosophy, when you actually think about it for more than two seconds. Why else do we as a society accept that eating meat is okay but cannibalism is wrong? Because our worldview says that humans have more inherent value than other life). The modern misconception of fascism comes from several intentional and unintentional misinterpretations of one fascist-adjacent ideology while purposefully ignoring the much broader range of thought. This quote perpetuates that misconception to make a cheap and quick emotional argument without actually addressing the real issues with these AI technologies.

Under fascist reasoning, AI can never be equal to a human because it has no context to build upon. Humans are superior because we can create, we can invent, we can think, and we can feel. Humans are the only known things that have the ability to think in the abstract, and you cannot replicate that with a computer. The very concept of a society or culture is a wholly human invention, something that no other being (organic or artificial) has the capacity to create, and that’s what makes us so great. Everything we do has the weight of all of that history behind it. An AI not only is incapable of abstract thought, but it also has no way to adjust its decisions to account for cultural variables or consider the many different levels of social structure that a human can process (i.e., family, tribe, community, nation, and world, taking into account the past, present, and future of all of these).

This very divide is even more ironic in that the same person calling AI fascist is saying “A society with counterfeit people we can’t differentiate from real ones will soon be no society at all.” which is a line of reasoning built off of fascist principles. The idea that such a thing as “counterfeit people” can even exist comes from a line of thought heavily based on fascism, and completely contradictory to the modern liberal/humanist line of thought.

Meanwhile, the use of AI in the way the proponents of ChatGPT envision it is a natural evolution of post-modern liberalism, a political philosophy that (despite the misconceptions held by a majority of Americans) relies inherently on the currently existing capitalist economic system. The idea of an AI that can take over vast portions of the social environment is one that only exists because (mostly) well-meaning and very smart people believe that it’s not only morally acceptable, but that to not do so is unconscionable. These proponents argue that not allowing something like ChatGPT to infest our society at scale is somehow doing great harm, despite the very clear danger it poses, and the damage it is actively causing (which you’ve provided so many examples of).

This isn’t to say that liberalism as a philosophy is inherently evil and fascism is inherently good, both are merely collections of different thoughts and ideas, and both have many different forms with their own quirks. What I am arguing is that the quote you provided is, at best, ignorant of the meaning of these philosophies and at worst, knowingly dishonest.

I am not the source of the quote (Judith Butler is). So, don’t kill the messenger here.

I think the quote fits within the article and the person’s critical (theory) analysis of the current situation of ChatGPT’s place in society. (FTR, I’m also not a person who throws around the term fascism for the heck of it. I agreed with what the person said, knowing what I know about the problems with LLMs.)

I think some key takeaways are: Who benefits? What are we going to use it for? Can something like this be used for good and benefit everyone in society?
Things like ChatGPT aren’t going to fix healthcare, whatever Sam Altman posted on Twitter a couple of weeks ago.

I just stumbled upon this article and am going to read it:


In my opinion the biggest problem is that the information that comes out is not monitored enough. It is very dangerous for your personal info to get into ChatGPT, because it could circulate very, very rapidly. What happens when some ten-year-old boy trusts some bot with his personal info, and is now at the mercy of someone who is willing to misuse that personal info?


I 100% agree that that is the biggest issue people face, especially kids who know no better. I can see where traffickers will benefit from this, which is extremely scary for our upcoming generation.


Agreed. That Kenyan workers thing is extremely scary. This is extremely dangerous in the hands of an evil company.