"21-year-old Ivy League dropouts raise millions, launch Bay Area AI startup to 'cheat on everything’"

I don’t disagree. My issue is that the program that allowed them to hack the interview process isn’t academic in nature and seemingly has nothing to do with Columbia’s academic integrity rules (as I stated above, it’s certainly an ethical issue IMO). If that program is a violation, is it an academic integrity violation if a Columbia student posts all the technical interview questions on Wall Street Oasis? Is it a violation if a student uses those thousands of technical interview posts on WSO to complete their technical interviews?

I am curious about who is providing their investment capital.

There are ethical considerations when lending to industries such as gaming, defense, alcohol and spirits, marijuana, and even pornography. In these cases the consideration is typically about the product and its societal implications. In most cases, however, no matter how potentially harmful or unethical the product is, the ability to generate profits and returns will bring in some investors, who can get morally comfortable if they will be paid back at a certain profit level.

In this case, however, you are investing not just in an “unethical” product but also entrusting founders who define themselves as cheaters. How could anyone ever trust their claims or business plan? By definition they don’t view honesty as a virtue, and they follow personal moral codes that allow for shortcuts and deception. The old expression applies: when someone tells you who they are, believe them.

These founders are the very definition of uninvestable, yet someone seems willing to trust the cheaters by giving them financial support. I hope the markets give them their just deserts.

14 Likes

The use of AI should not preclude one’s ability to actually THINK. And it should never be used for dishonest endeavors.

But then, I am a huge fan of honesty, and being able to think and reason on one’s own. These are very important life skills, and we should never lose them.

3 Likes

It could give some people more ideas on how to exploit a sense of ethics in other people that they themselves do not share. If they have also learned better critical thinking skills, they may become even more effective at exploiting others in whatever way they desire.

5 Likes

I can’t speak to what Columbia considers an academic integrity violation, but I don’t have an issue with students posting interview questions online. (LeetCode, for example, is full of technical assessments like the Amazon OA mentioned in the article, along with their solutions.)

To me, it’s no different than studying past SAT questions to prepare for the test.

Applicants are expected to understand the solutions well enough to write the code themselves. Using Cluely or another AI tool to generate the solution isn’t acceptable.

3 Likes

They are going to fail.

“Two Columbia University drop-outs have raised $5.3M for their company, Cluely.”

“At the end of April, its founders, Chungin Roy Lee and Neel Shanmugam, moved across the country from New York to open their new headquarters in a three-story live-work loft in the city’s Mission District, just south of the SoMa neighborhood.”

The rent on something that large in San Francisco’s Mission District is astronomical, they’re probably also renting apartments nearby, and, based on everything else, they are blowing through their money. They’re hiring interns at $200 an hour as well.

They’re burning through the money faster than they are raising it. That is not a successful business model. That is the “The Producers” business model.
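To put rough numbers on that claim, here is a back-of-envelope sketch. Only the $5.3M raise and the $200/hour intern rate come from the article; the headcount, hours, and rent are assumptions for illustration, not reported figures.

```python
# Hypothetical burn-rate arithmetic. Only the $5.3M raise and the
# $200/hour intern rate are reported; everything else is assumed.
raised = 5_300_000           # reported raise, USD

interns = 4                  # assumed headcount
intern_hourly = 200          # reported rate, USD/hour
hours_per_week = 40          # assumed full-time
weeks_per_month = 4.33       # average weeks in a month

intern_monthly = interns * intern_hourly * hours_per_week * weeks_per_month
rent_monthly = 40_000        # assumed: three-story live-work loft, Mission District

monthly_burn = intern_monthly + rent_monthly
runway_months = raised / monthly_burn

print(f"Monthly burn: ${monthly_burn:,.0f}")   # ~$178,560
print(f"Runway: {runway_months:.1f} months")   # ~29.7 months
```

Even under these assumptions, a handful of interns plus the loft eats roughly $180K a month, and that is before founder pay, marketing, legal, cloud compute, or the nearby apartments, all of which shorten the runway further.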

I’ve looked at their workers and interns, and they are the same kids who come here with a long list of companies “they” founded. They are wealthy kids from places like the Bay Area with money from their parents: a workforce made up of people with overinflated views of how good they are and the belief that they have a better understanding of AI than people who have been developing it for years.

They are either delusional about what their AI can do, or they have a simple business model: selling the data that the AI collects from people’s computers.

Either their software can do what they claim, in which case I see lawsuits in the future (fraud is still illegal, and so is aiding and abetting fraud), or it will not work and they will crash and burn. And even if it works and they are not embroiled in fraud, copyright violations, etc., there are already AI-detection tools that will render their cheating useless.

We’ve recently seen how companies that claim to be based on AI turn out to be frauds; look at what happened with Builder.ai. That is one reason I am skeptical of their claims.

"As a student, Lee said he was among a proportion of his peers that had deeply integrated AI into his life, using it to its fullest capabilities. "

Most people like this do not actually understand AI, and are treating it like a magic black box. They then make promises based on their lack of understanding of what AI can and cannot do.

That is a basic fact of tech and AI. Most people in the tech world do not understand AI or what it can and cannot do, and the field is full of people who make promises they cannot deliver, based on that lack of understanding. Like all the people who think they understand their cellphones because they know how to use them, these kids believe they understand AI because they have used AI.

As for the “$5 MILLION!!!”? Well, Builder.ai raised $450 million for an “AI” scam.

Let’s be serious: they couldn’t even trick a professor at Columbia. That does not really indicate that they actually know how to cheat.

7 Likes

Do classes still offer open notes or open book tests? Would AI be considered cheating then?

1 Like

Yes

Yes, because professors typically prohibit the use of AI. Moreover, formulating your own answer to an exam question by poring over hundreds of pages of a textbook or consulting abbreviated notes in a cheat sheet is fundamentally different from asking an AI to draw on its vast knowledge base to provide an answer.

6 Likes

It doesn’t bode well for people who might want to cheat their way into jobs or to greater success in college that the founders’ claims to fame are being barred from employment and being suspended or having their admissions revoked.

Also, $5M in fundraising in the Bay Area is laughably small, especially at that burn rate. These guys are classic dot-bomb relics 25 years after the fact. But they wouldn’t know about that, because I don’t think they read any history or economics; they just get an LLM to give them answers.

5 Likes

These fellas are odious and the ethics are obviously the most important thing here, but…

Cluely is SUCH a stupid name.

4 Likes

All of my in-person exams are modified open-note (not open book). I allow students to bring a two-sided 8.5x11 cheat sheet, handwritten or typed. I do require them to turn in the cheat sheet, but I don’t try to police AI use on the cheat sheets (it would take far too much time and serve little purpose).

My exams are written in such a way that if you really use the cheat sheet as a study tool, it will help you. If you fill it with a lot of AI-generated notes, it won’t: even with extensive notes on content, the essay questions still require you to think on your feet. Indeed, the cheat sheets that look AI-generated (I can often tell; there are red flags) don’t tend to lead to better exams. The ones students produce through authentic work really make a difference, though, which I can see when I compare averages. I tell my students that beforehand, but alas.

5 Likes

I don’t think it’s a matter of teaching ethics. The real issue is that we live in a world where, if someone makes enough money, they are regarded as heroes and get away with anything. We can teach ethics, but when teens see people with NO ethics being elected to public office, becoming stars in the entertainment industry, compensated for athletic talent, etc., it’s hard to convince them that acting this way isn’t right.

I could give many examples, but I don’t think I even need to.

11 Likes

“What our goal with this is to desensitize everyone to the phrase ‘cheating.’ If we say, ‘Cheat on anything at every possible turn,’” Lee said, “then the next time someone looks at you and says, ‘You’re using AI to cheat,’ it sort of becomes a nothing. But like, so what? What are you talking about? Cheat begins to lose its meaning.”

Clearly, Lee doesn’t believe cheating is bad — he only thinks it’s bad that others believe it is.

Therefore, his goal is to make “cheat” lose its meaning.

2 Likes

Unfortunately, poor people will be the ones most directly affected, as the data centers will be built in the areas where they live (NIMBY). And any existing rules will be ignored, because money talks. How do I know? It’s already happening.

4 Likes

We can hope…

These guys could throw themselves at promotion of this event if the startup fizzles out…

3 Likes

I was on an interview panel yesterday for a relatively high-level position. In looking at the performance task of one applicant, it was obvious that ChatGPT or similar had been used. And when the interview concluded, the opinion of the panel was unanimous…no.

8 Likes

Depressing but not surprising. Unfortunately, college- and HS-age kids have been cheating at an accelerating rate, and it started well before ChatGPT.

3 Likes

When someone shows you who they are, believe them.

The VCs are giving millions to an unethical person. They shouldn’t be surprised when he later fudges the sales projections, revenues, or cash flows, or is sued for sexual assault.

13 Likes