Stories like this make me wonder why I even bother showing up for work.
To respond to their claims that AI is a tool like a calculator or a Google search, and that those who want to get ahead will get with the times: no, it’s not. To use a calculator, you need to know the math concepts; the calculator won’t work for you if you don’t know what you’re using it for. To make the most of a Google search (or a library catalog search, etc.), you need to know enough to enter the right search terms, to assess and use the materials you find, and to tweak the search if needed (and it’s definitely needed, because AI has made Google so much worse). In both cases, you have to know how to do the work. This is in no way like either of those forms of tech help.
I didn’t click on the article, because the quoted material was disturbing enough.
I really don’t like this late-stage capitalism where making money seems more important than having any ethics.
Articles like this highlight, to me, how important it is to teach our young people critical thinking. Not only because of the AI piece of it, but, even more foundationally, because of how monumentally stupid it is to put a piece of software on your computer that has full access to your digital life (all of your files, all of your online account passwords, your computer’s camera and microphone … everything … 24/7) when that company is run by people with garbage ethics whose explicit goal is to help everyone cheat on everything. When this company has trouble raising its next round, what do you think they’re going to do with your data? In the meantime, if they’ve been busy vibe coding their way to the bottom of the ethics barrel, what confidence would you have that they know how to keep your data secure? So, so dumb.
Yes, I laughed at Lee’s claim that there was public pushback against the use of search engines. He’s making up history.
“If you look at any technology in the past that has significantly improved the capabilities of humans, whether that’s calculators and Google search, they were all met with an initial push back of, ‘hey, this is cheating,’ and a significant group of people who are moralizing against it,” he said.
And ethics!
I have been passionate about computers and software since I was 10. It deeply saddens me that these days, so many of my fellow technologists have no qualms about using technology in unethical ways. If it makes money, or produces high valuations for their startups, it seems anything is acceptable.
“Drop-out” suggests a voluntary action. In Lee’s case, however, he is under suspension from Columbia. Before that, his admission offer from Harvard was rescinded.
Sounds like he ended up exactly where he was headed all along: cheating, but for a profit.
I wouldn’t be surprised if jail is the next stop.
Right? Some kind of financial fraud or improper valuation?
For reasons I can’t get into outside of the political forum, I think we are in a period of history when having ethics is considered a weakness, and anything that makes money is considered hunky-dory, something to be lauded. It’s not a young-person thing. It doesn’t bode well for our society, unfortunately.
If not Cluely, it will be some other company. Cluely has many competitors, as a simple Google search will show. I’m not sure any of them are marketing their offerings as enabling ‘cheating’, though, even though many of the AI tools being developed will certainly allow their users to cheat.
I don’t really see the example of using the glasses on a date as ‘cheating’, though it’s certainly odd. I did see a demonstration of AI glasses used when shooting pool… little arrows and alignment markings show the player exactly where to strike the cue ball. That’s definitely cheating, at least in a real competition. (AI glasses have been around for at least a decade already.)
Back to the article… the whole framing of hacking the Amazon interview as an academic integrity violation? That doesn’t really seem like an academic integrity issue, although it’s certainly an ethical one. Do others see hacking a technical interview as an academic integrity issue within Columbia’s purview? It’s interesting that Amazon noted they didn’t communicate with Columbia, so that whole situation seems curious.
The whole zeitgeist of AI is so pervasive and often controversial right now, and I expect that’s not going to change anytime soon. Will we consider that companies are ‘cheating’ when they eliminate jobs as AI becomes more entrenched in our lives and able to do some things cheaper/faster/more efficiently than humans? I’m not necessarily supporting the young founders of Cluely, but there are tens of thousands of similar people in the world right now running similar startups and/or divisions within large corporations with similar goals. The real issue is that AI is more powerful than any of its creators, and no one will be able to control how these programs/tools are used once they’re out in the world.
I am surprised the cofounder says using AI was not prohibited by Columbia’s academic integrity guidelines:
“For Lee, he said Columbia’s reaction to the program they developed and then tested, with the Amazon interview process, further questioned the purpose of why he was in college.
“You guys are supposed to be an Ivy League school that champions the future generation of leaders, yet, this is how you respond to a student using AI that in no way violates anything in the student handbook?” he charged.
Lee asserts that he and Shanmugam were careful to use their program in a way to ensure they weren’t breaking any university rules.
“Before we even built the tool, we did a very deep dive through the student handbook to make sure that we didn’t violate any academic integrity policies,” he said. “And I mean, like technical interviews… It’s not under the jurisdiction of Columbia. So I’m just very surprised that they reacted to it.”
KTVU reached out to Columbia University for a statement, but a spokesperson declined to comment, citing the Family Educational Rights and Privacy Act, or FERPA, a federal law designed to protect the privacy of student education records.
Exactly. And in a world devoid of morals and ethics, yikes.
Articles like this, along with some of the decisions people have made in the last decade, make me think that ethics should be a required college class, if not for all majors, then certainly for anyone majoring in tech, business (or econ at schools that have no business majors), and political science.
My only defense of this bozo is that yes, people did object to reliance on Google (and some still do), when it would be more educational and more involved to go to real sources and do more legwork. But I don’t remember anyone saying Google was “cheating”.
I saw a news story the other day that indicated to me where we’re headed if everyone just uses AI for their “research”. There is a problem with nitrate pollution in some drinking water in Illinois. Nitrate (NO3-) is an ion that is very stable, physically and chemically, when dissolved in water. If you boil the water, the stable ions are left behind and become more concentrated.
All this is to say, the newscasters said, “boiling water just makes the problem worse, because heat makes nitrate more dangerous”. In fact, they must have googled heating nitrates, because in chemistry slang, “nitrates” can refer to metal nitrates like lead nitrate. Indeed, if you heat a solid metal nitrate (above about 200 °C), it loses some of its oxygen and can become a metal nitrite, which in a few cases is dangerous or even explosive. Or other nitrogen-oxide gases (NxOy) can be released. Again, that’s if you heat the dry metal nitrate above 200 °C.
So while it’s true that boiling the Illinois water will concentrate the ion and make it, if anything, less safe to drink, the newscaster totally misconstrued the chemistry and just parroted what seems to me to be an AI garble.
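To make the concentration point concrete, here’s a back-of-the-envelope sketch in Python (the amounts are made up for illustration, not real Illinois measurements):

```python
# Boiling drives off water as steam, but the dissolved nitrate ion stays
# behind, so its concentration can only go up. Illustrative numbers only.

nitrate_mg = 10.0   # total dissolved nitrate in the pot, in mg (assumed)
volume_l = 1.0      # starting volume of water, in liters (assumed)

print(f"Before boiling: {nitrate_mg / volume_l:.1f} mg/L")

volume_l -= 0.25    # boil off a quarter of the water; nitrate_mg is unchanged

print(f"After boiling:  {nitrate_mg / volume_l:.1f} mg/L")
# Before boiling: 10.0 mg/L
# After boiling:  13.3 mg/L  (more concentrated, so if anything less safe)
```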
I worry quite a lot that people will jump on the bandwagon that using AI is somehow giving them an advantage, and then dumb errors will propagate with ill effects. And this is on top of the huge ethical problems, risks to privacy, environmental impacts of all the power use for AI, etc.
OK, but do you think a class on ethics in college (or even high school) would effectively improve those people’s ethics?
I think it would at least open their eyes to ethical issues and to moral questions and their potential impacts. As some politicians like to say, college is not there to indoctrinate, but it should be a place to create exposure and to introduce subjects.
It’s like people who learn about nutrition and healthy practices: they end up making different decisions, and even when they opt for less healthy options over healthier ones, at least at that point it’s an informed decision. An ethics class is no guarantee that people’s ethics will improve, but at least any decisions made will have a more informed background in ethical considerations. And I think it would be foolhardy (and arguably unethical) to do nothing when we could do something that might improve the situation.

Will we consider that companies are ‘cheating’ when they eliminate jobs as AI becomes more entrenched in our lives and able to do some things cheaper/faster/more efficiently than humans?
No, I don’t consider that cheating.
It’s also not cheating when a student uses Google or AI to prepare for a test. Similarly, an employee using AI to work faster or more efficiently isn’t cheating either.
Cheating is when someone uses information they’re not supposed to have access to, like during an exam or interview, or when they get outside help that isn’t allowed. (Cluely’s software does both.)
To me, the line isn’t all that blurry.

I worry quite a lot that people will jump on the bandwagon that using AI is somehow giving them an advantage, and then dumb errors will propagate with ill effects.
I see this all the time in my (admittedly very low-stakes) history classes. Students have jumped on the AI bandwagon regardless of what garbage it spits out. I can tell them over and over that fake quotes in a paper (for example) are an obvious sign of AI usage and an automatic zero, but they turn in these assignments anyway. Not even a clear warning of failure on an assignment or in the class is enough to take away the temptation. They don’t even bother to double-check their AI-generated work for nonsense. I’m not even sure they would be able to recognize the nonsense if they found it.
And when these students are one day in a position in which their work has a clear impact on other people? What then?

I think it would at least open their eyes to ethical issues and to moral questions and their potential impacts.
It would probably resonate with some, but not enough.
In an adjacent situation, I was recently at a conference for college counselors. There were lots of sessions on AI, and I attended a few. The presenter of one session was a top AI leader/academic at a top college. Their focus was on helping counselors understand which colleges have strength in AI through majors/minors and what type of AI each focuses on. Toward the end, I asked to what extent, if any, these programs teach about the environmental impact of AI (which IMO is the scariest aspect of all of this). The person’s answer was that they don’t know of any school that covers it at all. Sigh.