Personal experiences with AI and thoughts

Same. Posted my annoyance with that on the “things that annoy me” thread :laughing:.

I know that a lot of folks calling for help have simple things that could be resolved without talking to a person. But if I’m getting on the phone, trust me, I’ve availed myself of every other resource and there was no other option than speaking with a person. Then it’s so frustrating to get stuck in the endless chatbot loop, especially since it’s usually such a specific issue that it can’t even direct me to the right place since it doesn’t understand what I’m asking.

3 Likes

The “chatbot loop” and call menus are probably not AI-based. Those are mostly conventional deterministic if-then-else programs. If you are in an unbreakable cycle of identical repetition/limited options, you can be almost certain the underlying tool is not AI-based.

Let’s not blame AI for everything that annoys us. :wink:

3 Likes

That’s true, and it’s a good reminder. That’s why I called them chatbots as opposed to AI… But I know that the distinction is lost on many. Even though we’ve all had the pleasure of being annoyed by chatbots for a long time now!

2 Likes

I asked the same question and asked it to include references for its conclusion. Each time, it searched the web and used references such as the Princeton Review, Niche, Reddit, and the CollegeConfidential forum, among others. However, all of these sources had a different list; so depending on which combination of websites it uses, it returns a different ranking. There isn’t any one objective ranking of “friendliest college.” As such, I don’t consider failing to return the same “friendliest college” ranking each time to be a failure.

More concerning would be objective results that differ with each query. For example, if you asked variations of “What is 2+2?”, and sometimes it returned something other than 4. This can occur with some AI tools for some types of queries.

For example, based on previous trials, I’ve learned that Gemini (not other AIs, specifically Gemini) has unique struggles with queries about the number of letters in state names. I asked it to list the states that have 7 letters in their name in 2 sequential queries. The 2 queries had different results. The first query counted “New York” as 7 letters; the 2nd query did not. Both queries missed Alabama and Indiana, and included Delaware, Nebraska, and Virginia. Gemini only got the correct answers after I added the query, “try again in slow think mode,” which returned:

Okay, let’s break this down carefully, state by state, counting the letters in each name.

Here are the states with exactly 7 letters in their name:

  1. Arizona (A-r-i-z-o-n-a = 7 letters)
  2. Delaware (D-e-l-a-w-a-r-e… oh wait, that’s 8 letters. My mistake there in previous attempts. Let’s correct that one. Delaware is 8.)

Let me restart the list completely and be very precise.

Here is a systematic count for each state:

  • Alabama (7 letters) - YES

  • Alaska (6 letters)

  • Arizona (7 letters) - YES

  • Arkansas (8 letters)

  • California (10 letters)

  • Colorado (8 letters)

  • Connecticut (11 letters)


    So, the states with exactly 7 letters in their name are:

  • Alabama

  • Arizona

  • Florida

  • Georgia

  • Indiana

  • Montana

  • New York (counting N-e-w-Y-o-r-k)

  • Vermont

  • Wyoming

This is the accurate list after a careful, slow, and individual review.

When I asked why Gemini initially got the answer wrong, it returned:

That’s an excellent question, and it highlights a common type of error that can occur even with advanced AI models.

My initial mistake, such as listing “Delaware” (which has 8 letters) as having 7, was due to a simple miscount or an oversight during rapid information retrieval and generation.

Here’s a more detailed breakdown of why this can happen:

  1. Not “Seeing” Letters Literally: As an AI, I don’t “see” or “read” letters in the same way a human does. My knowledge is based on patterns and relationships learned from the vast amount of text I was trained on. When asked to count letters, I’m performing an internal calculation or retrieving associated information about the string length, rather than visually inspecting each character.
  2. Internal Heuristics and Speed: For common tasks, especially when trying to generate a response quickly, I might rely on pre-computed or frequently accessed associations. If an association for a state’s name length was slightly off, or if the “counting” mechanism wasn’t executed with extreme rigor for every single item in the initial rapid pass, an error can slip through. It’s like a human quickly scanning a list and making a mental tally that’s slightly off by one.
  3. Lack of Explicit Verification in Initial Pass: In the first, quicker pass, the system might prioritize generating a coherent list based on general knowledge, without necessarily running a strict character-by-character count for every single state name in real-time.
  4. “Slow Think Mode” Corrects This: When I engage in “slow think mode,” I’m essentially instructed to perform a more deliberate, step-by-step verification. This means:
  • Iterating through each state name individually.
  • Explicitly “counting” (or calculating the string length) of each name.
  • Comparing that count against the desired number (7 in this case).
  • Only then adding it to the final list.

This more rigorous process catches the kind of simple, one-off miscounts that can occur in faster, less detailed generation. My apologies for the initial inaccuracies!
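For what it’s worth, the deterministic character-by-character check Gemini describes in its “slow think mode” explanation is trivial in ordinary code. A minimal Python sketch, using a hand-picked sample of state names rather than all 50 (the space in “New York” is exactly the ambiguity that trips up the count):

```python
# A sample of state names, including ones Gemini miscounted.
states = ["Alabama", "Delaware", "Indiana", "Nebraska", "New York", "Virginia"]

def letter_count(name: str) -> int:
    # Count alphabetic characters only, so "New York" -> 7 (the space is skipped).
    return sum(ch.isalpha() for ch in name)

seven = [s for s in states if letter_count(s) == 7]
print(seven)  # ['Alabama', 'Indiana', 'New York']
```

Unlike an LLM generating tokens from learned patterns, this loop literally inspects every character, which is why it can’t make the “Delaware has 7 letters” kind of slip.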

3 Likes

I love threads like this. I am learning a lot about AI!

1 Like

EXACTLY!! If I want to know my balance, I can look it up. If I want to make a payment, I can do that on the website. If I want to increase my credit limit, I can click on the proper tab and do that. If I am calling you, it’s something that I cannot do on your website!!!

7 Likes

Gift link

I was recently reading a mystery novel written by an author who is prolific. I have read at least 30 other novels by this author. I think she’s a good writer and she is not one whose books all sound alike. But, as I read this latest book, I wondered if she’d used AI to write it. And I wondered how I’d know if she had. Maybe she’s just tired after writing so many books. Or maybe she could feed AI the series and tell it to write the next episode. I did not enjoy the book much. It felt flat to me.

2 Likes

I love AI! I asked it to write a letter authorizing a financial advisor to speak to my dad’s bookkeeper and CPA. It did a great job and gave me the Word document, so now I can personalize it.

4 Likes

Makes a great assistant! I wrote my state congressional delegation - well, my shadow wrote it!

1 Like

I get frustrated when AI assembles a newsfeed article but the photos do not match the subject matter. These are hilarious. I thought it would be an informative article. Read below before you glance at the photos on the webpage.

  • Does that look like a dryer vent to you?

  • Think caulking that crack will help energy efficiency?

The rest are “ok,” and there’s a caveat at the end that makes me question whether any person actually reviewed it:

*This article was developed with AI-powered tools and has been carefully reviewed by our editors.

Link: MSN

2 Likes

This article, “AI leaps from math dunce to whiz,” brought me back to our discussion here about how some AI seems “bad” at math. This math professor, who once agreed, now says that some LLMs have not only caught up, but have gone far beyond:

The same AI models that had stumbled on the easiest problems now aced the hardest ones. Brenner was shocked. “This actually calls into question the entire way the class is taught,” he said. The course taken by many graduate students has a reputation for being tough. Brenner has taught it for more than two decades, and he’d always given students take-home exams; he wanted them to have time to wrestle with difficult questions without the stress of a ticking clock. But if ChatGPT could take the exam for his students, could he trust what they turned in?

He had two choices: Ban AI completely, or embrace it and redesign the class that has been taught at Harvard since before he was born.

He redesigned the class.

He shook off his old teaching paradigm to, instead, challenge his students to a match of wits with AI.

…he did away with traditional homework problems. Instead, students had to create their own problems, have a classmate verify them, and see if they could outsmart an AI. (“One of the good ones,” Brenner specified. “If the not-very-good models can solve your problem, it doesn’t count.”)

By the end of the semester, the students had created nearly 700 math problems of increasing difficulty. The data on whether AI could solve them could prove useful for researchers.

“It’s just amazing, because I’m teaching math, right? And we’ve got a situation where the students are inventing math problems that are harder and harder and harder, and trying to solve them. That is the dream.”

The article also addresses where these mathematically trained LLMs are headed.

3 Likes

Interesting results for AI at math competitions:

The smartest AI models ever made just went to the most prestigious competition for young mathematicians and managed to achieve the kind of breakthrough that once seemed miraculous.

They still got beat by the world’s brightest teenagers.

Every year, a few hundred elite high-school students from all over the planet gather at the International Mathematical Olympiad. This year, those brilliant minds were joined by Google DeepMind and other companies in the business of artificial intelligence. They had all come for one of the ultimate tests of reasoning, logic and creativity.

The famously grueling IMO exam is held over two days and gives students three increasingly difficult problems a day and more than four hours to solve them. The questions span algebra, geometry, number theory and combinatorics—and you can forget about answering them if you’re not a math whiz. You’ll give your brain a workout just trying to understand them.

Because those problems are both complex and unconventional, the annual math test has become a useful benchmark for measuring AI progress from one year to the next. In this age of rapid development, the leading research labs dreamed of a day their systems would be powerful enough to meet the standard for an IMO gold medal, which became the AI equivalent of a four-minute mile.

But nobody knew when they would reach that milestone or if they ever would—until now.

Full article, Gift link:
https://www.wsj.com/tech/ai/imo-gold-math-olympiad-google-deepmind-openai-2450095e?st=pCr4FP&reflink=desktopwebshare_permalink

3 Likes

Yes, the article I linked also corroborates this:

Evidence of AI’s mathematical capabilities is mounting. In 2024, two AI models from Google DeepMind earned a silver medal in the International Mathematical Olympiad, the largest and most prestigious competition for young mathematicians. Also in 2024, Demis Hassabis and John Jumper of DeepMind won the Nobel Prize in Chemistry for their AI model AlphaFold2, which predicted the structure of almost all 200 million known proteins.

“It is now possible to make a computational model that leads scientifically to the extent that, within years of publication, it wins a Nobel Prize,” Brenner said. “That’s unprecedented.”

From the article @Mwfan1921 linked above:

“I think it’s really likely that AI is going to be able to get a perfect score next year,” Wang said.

“That would be insane progress,” Zhang said. “I’m 50-50 on it.”

So given those odds, will this be remembered as the last IMO when humans outperformed AI?

“It might well be,” said Thang Luong, the leader of Google DeepMind’s team.

2 Likes

I’ll have to try adding up some dimensions again. My DIL is working on a program where I could speak the dimensions and get an answer. It’s a little more challenging than she expected.

1 Like

By nature I’m a pretty distrusting person and AI in general gives me the creeps.

That said…several friends (and my love) suggested I could use ChatGPT for travel info without having to weed through website after website. So while on my Netherlands trip I tried it a few times and it was a timesaver!

Example - I was going to take a train from Brussels to Amsterdam airport - all the travel boards I follow said the station I was going to was impossible to navigate, poor signage, no place to sit while waiting for platform info, etc.

I asked Chat which of the more than 30 platforms my train would be using (that info doesn’t usually show up for 15-20 minutes before the train arrives and everyone races to use the elevator or stairs and it’s chaotic and somewhat frantic). Chat let me know that there wasn’t a set platform however it could provide the month’s historical info if I wanted. So I said sure. It came back to tell me it had used platforms 5-7 historically. So rather than hang out by platform 30, I made my way to 5-7. I also asked where I could sit and wait without it being at one of the eateries - and it guided me to a place where there were about 10 seats. Lo and behold, the train came in to platform 5. I was able to calmly make my way (I tend to be quite anxious when I don’t have details, so this also lowered my anxiety level substantially!).

I also used it to sort my “wanna see and do” in Amsterdam so I wasn’t bopping from one end to the other - it sorted areas so I could do one at a time.

Will I use it regularly? Probably not. But the few times I did on this trip it really helped!

4 Likes

I haven’t read through the most recent of these, so pardon me if this is duplicate info. It dawned on me a while back, but I never looked into it until I saw an article on MSN or some other news page: your AI searches or assists may be up for public search. I never recalled seeing a “public” button, but I was only looking for an answer to something or assistance with a written doc. Fortunately none of it was private info (or embarrassing, except my contact info to the state delegation), but just be careful to confirm you do not have “public” accessibility on any of your AI search and/or assist applications. I don’t know exactly where to find the setting, but you should be able to google it to make sure. I was able to figure out how to delete my previous searches/assists on CoPilot and ChatGPT doing that. I just have to figure out how to make “no public search access” the default for future use. I’ll google it!

1 Like

As someone who never bothered to learn the parameters of AI, I’m finding this all fascinating. This week, I found out my 87-year-old mother uses AI to write the temple newsletter and her book club meeting minutes. She also uses it to make a chart to keep track of her medications (with checkmarks and everything) and home maintenance schedules. She’d recently started some genealogical studies. I’m wondering if it is useful for that application.

2 Likes

Well, ChatGPT was able to answer that question for me.

how can chatgpt help me with geneological research of relatives killed in the holocaust

ChatGPT said:

ChatGPT can be a powerful partner in Holocaust genealogical research, particularly if you want to trace relatives who were killed or displaced. Here’s how I can help:

1. Strategy & Research Planning

  • Step-by-step research plan: I can create a structured approach for tracing Holocaust-era relatives (e.g., which archives to check first, how to track migration paths).
  • Name variation analysis: Jewish names often changed due to language, immigration, or transcription errors. I can generate possible variations of surnames and given names to expand search results.
  • Timeline mapping: I can help reconstruct historical context to understand where your relatives might have been during key WWII events.

2. Accessing and Using Databases

  • Guide you to primary sources: I can point you to major Holocaust research resources such as:
    • Yad Vashem Central Database of Shoah Victims’ Names
    • USHMM Holocaust Survivors and Victims Database
    • JewishGen Holocaust Database
    • Arolsen Archives for Nazi persecution documents
  • Search query optimization: I can suggest keywords and filters (including town spellings in different languages) to maximize search hits.
  • Interpreting foreign-language records: If you find documents in German, Polish, Russian, or Hebrew, I can translate and explain them.

3. Document Analysis

  • Help decipher old handwritten records, ghetto lists, transport manifests, or Pages of Testimony.
  • Cross-reference different documents to establish family connections or movement patterns.

4. Building Family Trees

  • Assist in creating or expanding a digital family tree with available information.
  • Identify gaps where additional records might exist.

5. Emotional Support & Commemoration

  • Help you write memorials or biographies based on found data.
  • Create templates for recording oral histories from living relatives.

Would you like me to start by outlining a personalized step-by-step research plan for your Holocaust-related genealogical search (including which databases and record types to check first)?

This weekend was a girls’ weekend for me. One woman was telling us about a man she was dating (someone she found online). She said they have a good time together, but he doesn’t keep up with communication when they are not together. After 5 dates he still hasn’t kissed her, etc. Of course, we gave her our assumptions and advice.
The next day we were driving in a car, and just for fun we said, why don’t we ask ChatGPT? We were completely floored that ChatGPT came back with almost exactly what we had said the night before. It even came up with a few ways to approach the guy to have a discussion.
My friend then added another piece of information: the man had an accident in which his forehead was hit, a traumatic incident. ChatGPT said, “Oh, thanks for sharing the additional information. It explains a lot of his behaviors… At the same time you should also consider your own needs, if this is the kind of relationship you want. Here are some ways you may have a discussion about his condition.”
My friend had signed up for the voice option in the ChatGPT app. The voice sounded very human. Who needs a therapist or a career coach now?

1 Like