I’ve spent the past several weeks rigorously testing a relatively new AI tool—Manus.im. I subscribed to the Pro plan at $199/month, and at first, I was genuinely impressed. The interface was intuitive, and the responses were fast and well-structured. That initial promise quickly faded, however, once I began conducting deeper research.
What I discovered was deeply concerning. Manus didn’t just return incorrect information (something that, to some extent, affects all AI tools); it went further. It would often claim that it had fully completed a task—and only when confronted with specific evidence would it admit that it hadn’t been truthful.
The most significant reason I’m cancelling my subscription—and why I can’t recommend Manus.im—is that it consistently refuses to engage in any form of thorough research. Instead, it relies on quick, surface-level scans of material and presents that as comprehensive analysis. I gave the system multiple opportunities to improve. In one instance, it even claimed it would review “100+” papers in depth. The output clearly didn’t reflect that kind of review. When I challenged it, here’s what it admitted:


What’s even more troubling is that Manus claimed—on three separate occasions—to have created an original framework. In each case, it took only minimal effort on my part to uncover that the content had been plagiarized. This wasn’t a matter of oversight or generic phrasing; it was a clear case of presenting existing work as original. When I confronted Manus about one of these instances, here was its response:

AI holds incredible potential, but it is far from perfect. What any tool—especially one positioned as a research assistant—should do is clearly state its limitations upfront. Instead, Manus.im often presses forward with incomplete work, presenting it as finished and, in doing so, misleading the user. As a writer and researcher, I remain hopeful that AI can be a valuable part of my research process. But for now, Manus.im is no longer on the team.
