Launch HN: mrge.io (YC X25) – Cursor for code review
dimal 46 minutes ago [-]
Looks interesting. I’m a bit confused about how it knows the codebase, and about the custom rules interface. I generally keep coding standards docs in the repo. Can it simply be made aware of those docs instead of requiring me to maintain two sets of instructions (a written one for humans, and one in the mrge interface for the AI)? I could imagine that without being highly aware of a team’s standards, the usefulness of its review would be pretty poor. Getting general “best practices” type stuff wouldn’t be helpful.
alexchantavy 4 hours ago [-]
Been using this for https://github.com/cartography-cncf/cartography and am very happy, thanks for building this.

Automated review tools like this are especially important for an open source project, because you have to maintain a quality bar to keep yourself sane, but if you're too picky then no one from the community will want to contribute. AI tools are like linters: they have no feelings, so they will give the feedback that you as a reviewer may have been hesitant to give, and that's awesome.

Oh, and on the product itself, I think it's super cool that it comes up with rules on its own to check for based on conventions and patterns that you've enforced over time. E.g. we use it to make sure that all function calls that pull from an upstream API are decorated with our standard error handler.
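As a simplified sketch of what that convention looks like (all names here are hypothetical; our real handler is more involved), the rule effectively checks for something like this:

```python
import functools

def handle_upstream_errors(func):
    # Hypothetical standard error handler: wraps upstream API calls so
    # failures surface as a single domain-specific exception type.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except (OSError, ValueError) as exc:  # stand-ins for upstream failures
            raise RuntimeError(f"upstream call failed in {func.__name__}") from exc
    return wrapper

@handle_upstream_errors
def get_user(user_id):
    # Pretend this hits an upstream API; the review rule flags any such
    # function that is missing the decorator above.
    return {"id": user_id}
```

The review rule then only has to spot an upstream-API call site that isn't wrapped, which is exactly the kind of mechanical check a human reviewer tends to forget.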

pomarie 4 hours ago [-]
Thanks for sharing that Alex! Definitely love having an AI be the strict reviewer so that the human doesn't have to
justanotheratom 7 hours ago [-]
This is an awesome direction. Few thoughts:

It would be awesome if the custom rules were generalized on the fly from ongoing reviewer conversations. Imagine two devs quibbling about line length in a PR, and in a future PR the AI reminds everyone of that convention.

Would this work seamlessly with AI Engineers like Devin? I imagine so.

This will be very handy for solo devs as well; even those who don't use coding copilots could benefit from an AI reviewer, as long as it doesn't waste their time.

Maybe multiple AI models could review the PR at the same time, and over time we promote the ones whose feedback is accepted more often.

allisonee 7 hours ago [-]
Appreciate the feedback! We currently auto-suggest custom rules based on your comment history (and .cursorrules), and continuing to suggest rules from ongoing review history is now on the roadmap thanks to your suggestion!

On working with Devin: Yes, right now we're focused on code review, so whatever AI IDE you use would work. In fact, it might even be better with autonomous tools like Devin since we focus on helping you (as a human) understand the code they've written faster.

Interesting idea on multiple AI models--we were also separately toying with the idea of having different personas (security, code architecture). Will keep this one in mind!

justanotheratom 3 hours ago [-]
personas sounds great!
8organicbits 4 hours ago [-]
Line length isn't something I'd want reviewed in a PR. Typically I'd set up a linter with relevant limits and defer to that, ideally using pre-commit testing or directly in my IDE. Line length isn't an AI feature, it's largely a solved problem.
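For instance, a pre-commit hook enforces the limit before code ever reaches review (the repo pin and 100-character limit below are illustrative choices, not a recommendation):

```yaml
# .pre-commit-config.yaml -- illustrative line-length enforcement
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 7.1.1
    hooks:
      - id: flake8
        args: ["--max-line-length=100"]
```

With this in place, `pre-commit run --all-files` fails on overlong lines locally, so the PR review never needs to mention them.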
justanotheratom 3 hours ago [-]
bad example, sorry.
pomarie 7 hours ago [-]
These are all amazing ideas. We actually already see a lot of solo devs using mrge precisely because they want something to catch bugs before code goes live—they simply don't have another pair of eyes.

And I absolutely love your idea of having multiple AI models review PRs simultaneously. Benchmarking LLMs can be notoriously tricky, so a "wisdom of the crowds" approach across a large user base could genuinely help identify which models perform best for specific codebases or even languages. We could even imagine certain models emerging as specialists for particular types of issues.

Really appreciate these suggestions!

bryanlarsen 8 hours ago [-]
It looks like graphite.dev has pivoted into this space too. Which is annoying, because I'm interested in graphite.dev's core non-AI product. Which appears to be stagnating from my perspective -- they still don't have gitlab support after several years.
pomarie 7 hours ago [-]
Yeah, noticed that too—what's the core graphite.dev feature you're interested in? PR stacking, by chance?

If that's it, we actually support stacked PRs (currently in beta, via CLI and native integrations). My co-founder, Allis, used stacked PRs extensively at her previous company and loved it, so we've built it into our workflow too. It's definitely early-stage, but already quite useful.

Docs if you're curious: https://docs.mrge.io/overview

bryanlarsen 7 hours ago [-]
Yes, stacked PRs and a rebase-only flow. Unfortunately we're a GitLab shop. Today's task is a particularly hairy review; it's too bad I can't try you out.
pomarie 7 hours ago [-]
Ah, totally get it—that’s frustrating. GitLab support is on our roadmap, so hopefully we can help you out soon.

In the meantime, good luck with that hairy review—hope it goes smoothly! If you're open to it, I'd love to reach out directly once GitLab support is ready.

bryanlarsen 7 hours ago [-]
Email is in profile. You're welcome to add me to your list.
eqvinox 4 hours ago [-]
Threw a random PR at it… of the 11 issues it flagged, only 1 was appropriate, and that one was also caught by pylint :(

(mixture of 400 lines of C and 100 lines of Python)

It also didn't flag the one SNAFU that really broke things (which to be fair wasn't caught by human review either, it showed in an ASAN fault in tests)

allisonee 4 hours ago [-]
sorry to hear that it didn't catch all the issues! if you downvote/upvote or reply directly to the bot comment @mrge-io <feedback>, we can improve it for your team.

We take all of these into consideration when improving our AI, and your direct reply will fine-tune comments for your repository only.

eqvinox 3 hours ago [-]
That's good to know, but (assuming my sample of size 1 isn't a bad outlier; I should really try a few more) there's another problem: I don't think we'd be willing to sink time into tuning a currently-free subscription service that can be yanked at any time. And I'm in a position to say it is highly unlikely that we'd pay for the service.

(We already have problems with our human review being too superficial; we've recently come to a consensus that we're letting too much technical debt slip in, in the sense of unnoticed design problems.)

Now the funny part is that I'm talking about a FOSS project with NVIDIA involvement ;D

But also: this being a FOSS project, people have opened AI-generated PRs. Poor AI-generated PRs. This is indirectly hurting the prospects of your product (by reputation). Might I suggest adding an AI generated PR detector, if possible? (It's not in our guidelines yet but I expect we'll be prohibiting AI generated contributions soon.)

allisonee 3 hours ago [-]
totally get where you're coming from--many big open source repos have also been using it for a while and have seen some false positives, but have generally felt that the overall quality was worth it. would love to continue having you try it out, but also understand that maintaining a FOSS project is a ton of work!

if you have specific feedback on the PR--feel free to email contact@mrge.io and i'll take a look personally and see if we can adjust anything for your repo.

nice idea on the fully AI-generated PRs! something on our roadmap is to better highlight PRs or chunks that were likely auto-generated. stay tuned!

LinearEntropy 1 hour ago [-]
The call to action button says "Get Started for Free", while the pricing page lists $20/month.

Clicking the get started button immediately wants me to sign up with github.

Could you explain on the pricing page (or just to me) what the 'free' is? I'm assuming a trial of 1 month or 1 PR?

I'm somewhat hesitant to add any AI tooling to my workflows; however, this is one of the use cases that makes sense to me. I'm definitely interested in trying it out, I just think it's odd that this isn't explained anywhere I could find.

allisonee 8 minutes ago [-]
thanks for bringing this up! we're currently free (unlimited PRs) and will soon bill $20-$30 per active user (anyone who has committed a PR) per month.

We'll try to make this clearer!

dyeje 5 hours ago [-]
I've been evaluating AI code review vendors for my org. We've trialed a couple so far. For me, taking the workflow out of GitHub is a deal breaker. I'm trying to speed things along, not upend my whole team's workflow. What's your take on that?
pomarie 5 hours ago [-]
Yeah, that's a totally legit point!

The good news with mrge is that it works just like any other AI code reviewer out there (CodeRabbit, Copilot for PRs, etc.). All AI-generated review comments sync directly back to GitHub, and interacting with the platform itself is entirely optional. In fact, several people in this thread mentioned they switched from Copilot or CodeRabbit because they found mrge's reviews more accurate.

If you prefer, you never need to leave GitHub at all.

gslepak 4 hours ago [-]
Looked at it, but as a security person, I have to recommend against it as it requires permissions to act on behalf of repository maintainers. That is asking for trouble, and represents a backdoor into every project that signs up for it.
allisonee 4 hours ago [-]
thanks for bringing this up, and totally understand the concern. we are committed to security, and we never write to or access your code without your action--the only reason that permission is necessary is so that you can merge or one-click-commit the AI's suggestions directly from the review comments it has posted.
mdaniel 7 hours ago [-]
I see on your website that you claim the subprocessors are SOC2 type 2 certified, but it doesn't appear that you claim anything about your SOC2 status (in progress, certified, not interested). I mention this because I would suspect the breach risk is not that OpenAI gets popped but rather that a place which gathers continuously updated mirrors of source code does. The sandbox idea only protects the projects from one another, not from a malicious actor injecting some bad dep into your supply chain
pomarie 7 hours ago [-]
That's a very good point. We actually just kicked off our own SOC 2 certification process last week—I hadn't updated the website yet, but I'll go ahead and do that now. Thanks for raising this!

Appreciate the feedback around security as well; protecting against supply-chain attacks is definitely top of mind for us as we build this out.

mdaniel 7 hours ago [-]
I know I'm not supposed to mention website issues here, but since you brought it up I wanted to bring to your attention that the "fade in on scroll" isn't doing you any favors for getting the information out of your head and into the heads of your audience. That observation then went to 11 when I scrolled back up and the entire page was solid black, not even showing me the things it had previously swooshed into visibility. It's your site, do what makes you happy, but I just wanted to ensure you were aware of the tradeoff you were making
pomarie 6 hours ago [-]
Hey, thanks again—really appreciate the heads-up! Could you point me to the specific section where you're seeing the fade in on scroll? Also, what browser are you using?

I don't remember adding that feature so it might be a bug

ukuina 7 hours ago [-]
How does this work for large monorepos?

If the repo is several GB, will you clone the whole thing for every review?

allisonee 7 hours ago [-]
good q! today, we'd clone the whole thing, but we're actively looking into solutions for that atm (i.e. only cloning the relevant subdirectories).

for custom rules, we do handle large monorepos by allowing you to add an allowlist (or exclude list) via glob patterns.

kerryritter 8 hours ago [-]
This looks like a cool solve for this problem. Some of the other tools I tried didn't seem to contextualize the app, so the comments were surface level and trite.

I'm on Bitbucket so will have to wait :)

pomarie 8 hours ago [-]
Thanks, really appreciate that! Yeah, giving the AI the ability to fetch the context it needs was a big challenge (since larger codebases can't all fit in an LLM's context window)

And totally hear you on Bitbucket—it's definitely on our roadmap. Would love to loop back with you once we get closer on that front!

timfsu 7 hours ago [-]
Happy mrge user here - congrats on the launch! It’s encouraged our team to do more stacked PRs and made every review a bit nicer
allisonee 7 hours ago [-]
thanks Tim! So glad it's been helping your team move faster
pomarie 7 hours ago [-]
Really appreciate the feedback, really happy it's helping you :)
KyleForness 6 hours ago [-]
happy user here—our team moved from coderabbit to mrge, and everyone seems to love how much more useful the AI comments are
pomarie 6 hours ago [-]
Really happy to hear mrge is useful! :) Thanks for sharing
allisonee 6 hours ago [-]
thanks for the feedback! Glad that our ai reviewer has been useful to your team!
ggarnhart 2 hours ago [-]
Heyo your launch video is unlisted on youtube. Maybe intentional, but you might benefit from having it be public :)
bilekas 6 hours ago [-]
> We know cloud-based review isn't for everyone, especially if security or compliance requires local deployments. But a cloud approach lets us run SOTA AI models without local GPU setups, and provide a consistent, single AI review per PR for an entire team.

I feel like that’s being glossed over a bit too briefly. Isn’t your target market primarily larger teams, who are most likely to have security and privacy concerns?

I guess: is there something on the roadmap to maybe offer that later?

pomarie 6 hours ago [-]
Definitely—larger teams do typically have more stringent security and privacy requirements, especially if they're already using self-hosted GitHub. Self-hosted or hybrid deployment is definitely on our radar, and as we grow, it's likely we'll offer a self-hosted version specifically to support those larger teams.

If that's something your team might need, I'd love to chat more and keep you posted as we explore this!

jFriedensreich 5 hours ago [-]
Great that AI is seemingly reviving the stalled PR/review space. I just hope that human and local workflows won't be an afterthought or made even harder by these tools. It's also a great chance for stacked PRs and Jujutsu to shake up the market.
pomarie 5 hours ago [-]
Definitely! As AIs write a lot more code, I think that the PR/review space is going to become way more important.

If you're interested in stacked PRs, you should definitely check them out on mrge--we natively support them (in beta atm): https://docs.mrge.io/ai-review/overview

auscompgeek 6 hours ago [-]
I wanted to check this out, so I installed the GitHub app on my account, with access to all my personal repos. However when I went looking for one of my repos (auscompgeek/sphinxify) I couldn't find it. It looks like I can only see the first 100 repos in the dashboard? I have a lot of forks under my account…
pomarie 5 hours ago [-]
Quick update – we've merged a fix which should be live in ~15 mins! Thanks for reporting this :)
allisonee 6 hours ago [-]
sorry about that! we're looking into this now--if you go back to https://github.com/apps/mrge-io-dev/installations/select_tar... and just add the repos you want to use us with under "select repositories", that should unblock you until we fix it in the next hour or so.
allisonee 5 hours ago [-]
just to follow up--the fix for this is landing! thanks for surfacing
mushufasa 6 hours ago [-]
Honest initial reaction to your pitch:

> Cursor for code review

Isn't Cursor already the "Cursor for code review"?

allisonee 6 hours ago [-]
appreciate the honest reaction! We'll think about this more--what we were trying to get at is that Cursor is more about writing code, and we're tackling the review/collaboration side :) curious if anything else would have immediately stuck out to you more?
mushufasa 6 hours ago [-]
I think I got the pitch meaning immediately: this is a specialized ai tool for code review.

That said, that doesn't sound very useful when I already use an AI code editor for code review. And GitHub already supports CI/CD automations for AI code-review tools. Maybe I just don't see value in an extra tool for this.

thuanao 47 minutes ago [-]
It's been useful at our company. My only gripe is I'd like to run it locally. I don't want the feedback after I open a PR.
_insu6 7 hours ago [-]
I've tried something similar in the past. The concept is cool, but so far the solutions I've seen are not so useful in terms of comments quality and ability to catch bugs.

Hope this is the right time, as this would be a huge time-saver for me

allisonee 7 hours ago [-]
We had heard the same from a few early users, but they've commented that our AI is more context-aware/useful. Of course, that's just anecdotal. We'd love to give you a free trial (https://mrge.io/invite?=hn) and get your feedback on quality/bug catching. Feel free to reach out at contact@mrge.io if you have any questions too!
william_stokes 7 hours ago [-]
I was wondering if it has information about previous commits with deleted code? Sometimes we make a change and later realize that the previous code worked better, would mrge be able to understand that?
allisonee 7 hours ago [-]
that's a good question! today, we don't look at previous commits--but that's something we'll consider for the future roadmap. curious if this happens often to your team? and if so, how you generally gauge "better" (on the prev commits)
_jayhack_ 7 hours ago [-]
If you are looking for an alternative that can also chat with you in Slack, create PRs, edit/create/search tickets and Linear, search the web and more, check out codegen.com
mw3155 6 hours ago [-]
in the demo video i see that you can apply a recommended code change with one click. how do you make sure that the code still works after the AI's changes?

also, i tried some other ai review tools before. one big issue was always that they are too nice and even miss obvious bad changes. did you encounter these problems? did you mitigate this via prompting techniques or finetuning?

pomarie 6 hours ago [-]
Great questions!

For applying code changes with one-click: we keep suggestions deliberately conservative (usually obvious one-line fixes like typos) precisely to minimize risks of breaking things. Of course, you should confirm suggestions first.

Regarding AI reviewers being "too nice" and missing obvious mistakes—yes, that's a common issue and not easy to solve! We've approached it partly via prompt-tuning, and partly by equipping the AI with additional tools to better spot genuine mistakes without nitpicking unnecessarily. Lastly, we've added functionality allowing human reviewers to give immediate feedback directly to the AI—so it can continuously learn to pay attention to what's important to your team.

mw3155 6 hours ago [-]
thanks for answering! will definitely check out the tool when i have the chance. best of luck building this!
victorbjorklund 7 hours ago [-]
Would be great to have support for GitLab also (have a project there that I would love to try this on and I can't switch it to GitHub)
allisonee 7 hours ago [-]
On the roadmap! If you're happy to share your email for an early link when we do support it, send to contact@mrge.io
victorbjorklund 7 hours ago [-]
Great! Will test it on Github first.
manmal 4 hours ago [-]
Is that the four letter domain PG recently tweeted about? Congrats!
pomarie 4 hours ago [-]
It's possible! What was the tweet?
yoavz 7 hours ago [-]
Excellent product, congrats on the launch guys!
deveshanand18 7 hours ago [-]
As far as I can see, this doesn't directly integrate with github (we currently use coderabbit on github)? Is it on your timeline?
allisonee 7 hours ago [-]
good question! we currently support a direct integration with github via a github app. we'll make that clearer in the post.
Arindam1729 6 hours ago [-]
I've used CodeRabbit for Code Review. It does pretty cool work.

How different is it from that?

pomarie 6 hours ago [-]
Great question!

We've heard from users who've tried both that our AI reviewer tends to catch more meaningful issues with less noise, but that's really something you should try for yourself! (The great thing is that it's really easy to start using.)

Beyond the AI agent itself (which is somewhat similar to CodeRabbit), our biggest differentiation comes from the human review experience we've built. Our goal was to create a Linear-like review workflow designed to help human reviewers understand and merge code faster.

mmmeff 3 hours ago [-]
Any plans to support github enterprise on different URLs? Would love to give this a try with my team.
axelb78 3 hours ago [-]
Looks awesome!
nikolayasdf123 5 hours ago [-]
why not GitHub Copilot?
pomarie 5 hours ago [-]
Great question!

We've heard from users who've tried both that our AI reviewer tends to catch more meaningful issues with less noise, but that's really something you should try for yourself! (The great thing is that it's really easy to start using.)

Beyond the AI agent itself (which is somewhat similar to Copilot), our biggest differentiation comes from the human review experience we've built. Our goal was to create a Linear-like review workflow designed to help human reviewers understand and merge code faster.

landkittipak 6 hours ago [-]
This looks incredible!
tomasen9987 6 hours ago [-]
This looks interesting!
JofArnold 6 hours ago [-]
Congrats on the launch. Another happy user here. (Caught a really sneaky issue too!)
pomarie 6 hours ago [-]
Thanks for sharing that Jof! Glad it's helpful :)
thefourthchime 7 hours ago [-]
One personal niggle: "Code Review For The AI Era". I hate when people say era in relation to AI because it reminds me of Google's tasteless Gemini era thing.
allisonee 7 hours ago [-]
that makes total sense, thanks for the feedback! we debated this for a bit--will keep in mind for the next design pass on the site :)