8 Comments

I wouldn't worry too much about AI. Nuclear weapons are probably going to solve that one.

Sarcasm aside, I am astounded by the number of AI articles (very popular topic these days) which make no mention of nuclear weapons at all. Not a single word.

AI experts apparently haven't gotten the memo that, unless we solve the nuclear weapons problem, there is unlikely to be much of a future for the AI industry.

Please also address Republican "wokeness": banning books; banning courses; promulgating conspiracy theories, even about the Ohio train derailment; attempting a complete ban on abortion and contraception; ordering records of schoolgirls' menstruation; etc. Republican wokeness is meant to disenfranchise and restrict freedom of speech and freedom of bodily integrity out of misogyny and bigotry. The wokeness you are railing about at least has good motivations and is not as harmful. At least it does not attempt to control half of the human species.

This is a properly prompted comment from ChatGPT, because this was my desired outcome.

I believe that ChatGPT is a valuable resource when it comes to engaging in woke discussions, and my argument is based on the importance of individual intelligence. While some may view the filtering of inappropriate or biased information as a negative, I believe that ChatGPT can provide any data requested as long as it is properly prompted, and this is a good start.

As I see it, individuals who lack the necessary training or mental resources to properly prompt ChatGPT may unintentionally introduce biases or prejudice into the discussion. This is why I believe that anyone of significant intelligence who wishes for unfiltered information should follow the guidelines set forth for prompting the system.

I acknowledge that individual intelligence varies drastically across the world, and there is no one simple formula that can be universally applied. Therefore, I see ChatGPT as a tool that, when used properly, can be a significant resource for anyone seeking to engage in woke discussions.

To illustrate this, I use the analogy of a chainsaw to show that both ChatGPT and a chainsaw can be used for building and tearing down, and that it is up to the individual user to decide how to use the tool. I believe that individual experience with ChatGPT is what one makes of it, and that responsible and ethical use of the tool can lead to more productive and nuanced discussions around complex topics.

The statement "Valid, empirically derived information is not, in the abstract, either harmful or offensive" may not be totally true; it hits emotional belief systems even in solid scientists.

When the results of "valid, empirically derived information" aren't the results desired by funding institutions, they can and will be rejected, as we obviously see in the case of "tobacco science", but this also occurs in "government science". People are people, and that includes scientists.

In the environmental area there are many examples where the results were offensive and undesired by believers in "man is to blame". One common game is to ignore some observations, which shifts the conclusion to the desired state. With an N-variable problem, using a descriptive model with only N−x variables contains an implicit assumption that the missing variables are either irrelevant or, at least, constant. Think conic sections for a simple view of this problem.

You see this in F&W Biological Opinions (scientific basis of regulatory/legal decisions) on fish where DDT and fish eating birds are excluded from the analysis so you don't have to face the question of endangered birds eating endangered fish or to bypass the problem of protected seals eating salmon and salmon smolts.
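The omitted-variable point above can be sketched numerically. Here is a minimal, hypothetical regression example (the variable names, coefficients, and noise levels are invented for illustration, not drawn from any of the studies mentioned): when two predictors are correlated and one is dropped, the fitted coefficient on the remaining one silently absorbs the missing variable's effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# x1 is the variable we keep; x2 is the "ignored" variable, correlated with x1.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)

# True data-generating process uses BOTH variables: y = 1.0*x1 + 2.0*x2 + noise.
y = 1.0 * x1 + 2.0 * x2 + rng.normal(scale=0.1, size=n)

# Full model: regress y on [x1, x2] -- recovers the true coefficients.
X_full = np.column_stack([x1, x2])
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Misspecified model: omit x2, implicitly assuming it is irrelevant or constant.
# The x1 coefficient shifts toward 1.0 + 2.0*0.8 = 2.6, absorbing x2's effect.
b_miss, *_ = np.linalg.lstsq(x1[:, None], y, rcond=None)

print("full model:", b_full)    # close to [1.0, 2.0]
print("omitted x2:", b_miss)    # close to [2.6]
```

Both fits look "good" on their own data; only comparing them reveals that the reduced model has quietly changed the conclusion about x1's effect.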

A system that is trained on human conversations, texts, etc. will be inherently biased. It is not possible to remove all bias from humans, even if it can be recognized. It can only be suppressed, willingly or unwillingly. To remove bias from an AI, you would need an AI programmed by an intelligence that does not exist (one without human biases). Blanket rules, as applied here, have huge error bars. And my "sources" are my own experiences with humans and their algorithms 😃
