This kind of news should be a death-knell for OpenAI.
If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
ChuckMcM 3 hours ago [-]
Alternative is that OpenAI is being quickly locked out of sources of human interactions because of competition; one way to "fix" that is to build your own meadow for data cows.
xAI isn't allowing people to use the Twitter feed to train AI
Google is keeping its properties for Gemini
Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
So you plant a meadow of tasty human interaction morsels to get humans to sit around and munch on them while you hook up your milking machine to their data teats and start sucking data.
baq 2 minutes ago [-]
> Microsoft, who presumably could let OpenAI use its data fields, appears (publicly at least) to be in a love/hate relationship with OpenAI these days.
sama probably would like to take Satya's seat for what he no doubt sees as unblocking the path to utopia. The slight problem is he's becoming a bit lonely in that thinking.
lucianbr 54 minutes ago [-]
The assumption that you can just build a successful social network as an aside, because you need access to data, seems wildly optimistic. Next will be Netflix announcing it's working on AGI because show writers haven't been very imaginative lately, and they need fresh content to keep subscribers.
Springtime 2 hours ago [-]
They also have a contract with Reddit to train on user data (a common go-to source for finding non-spam search results). Unsure how many other official agreements they have vs just scraping.
safety1st 2 hours ago [-]
Good heavens, I'd think that if anything could turn an AI model into a misanthrope, it would be this.
Springtime 1 hour ago [-]
One distinctive quality I've observed with OpenAI's models (at least the cheapest tiers of 3, 4, and o3) is their human-like face-saving when confronted with things they've answered incorrectly.
Rather than directly admit fault, they'll regularly respond in subtle (more so o3) to not-so-subtle roundabout ways that deflect blame rather than admit direct fault, even when it's an inarguable factual error about conceptually non-heated things like API methods.
It's an annoying behavior of their models and in complete contrast to, say, Anthropic's Claude, which ime will immediately and directly admit to things it responded incorrectly about when the user mentions it (perhaps too eagerly).
I have wondered if this is something it's learned from training on places like Reddit, or if OpenAI deliberately taught it or instructed it via system prompts to seem more infallible, or if models like Claude were deliberately tuned to reduce that aspect.
scyzoryk_xyz 37 minutes ago [-]
Mmmm nice try hooking me up.
Instead, I’m just going to hang out here in this hacker meadow and on FOSS social networks where something like that would never happen!
kromem 30 minutes ago [-]
Don't underestimate the importance of multi-user human/AI interactions.
Right now OAI's synthetic data pipeline is very heavily weighted to 1-on-1 conversations.
But models are being deployed into multi-user spaces that OAI doesn't have access to.
If you look at where their products are headed right now, this is very much the right move.
Expect it to be TikTok style media formats.
Nuzzerino 5 hours ago [-]
> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.
I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.
Death-knell? Maybe… but I wouldn’t read into it. I’d be looking more at their key employees leaving. That’s what kills companies.
smt88 3 hours ago [-]
- Product is not kickass. Hallucinations and cost limit its usefulness, and it's incinerating money. Prices are already high and would need to go much higher to turn a profit.
- Their brand value is terrible. Many people loathe AI for what it's going to do for jobs, and the people who like it are just as happy to use CoPilot or Cursor or Gemini. Frontier models are mostly fungible to consumers. No one is brand-loyal to OpenAI.
- Many key employees have already left or been forced out.
robotresearcher 4 hours ago [-]
AGI is a technology or a feature, not a product. ChatGPT is a product. They need some more products to pay for one of the most expensive technologies ever (that has yet to be delivered).
leptons 3 hours ago [-]
It's a shame that $1 trillion is being poured into AI so quickly, but fusion research has only seen a fraction of that over many decades.
Tenoke 16 minutes ago [-]
Much, much less was being poured into AI until it started to have some returns.
andsoitis 2 hours ago [-]
Isn’t that a reflection, to some (major?) extent, of the general sense of likelihood of breakthrough?
xmprt 2 hours ago [-]
I used to think like this but after seeing the amount of money invested into crypto companies which most average people could have quickly dismissed as irrelevant, I'm not sure VCs are a good judge of value.
caseyy 55 minutes ago [-]
It’s why we have warnings to investors saying past performance is not an indicator of future returns.
j_maffe 2 hours ago [-]
No, it's the perception of likelihood of breakthrough.
imafish 53 minutes ago [-]
Unfortunately VC investments rely on FOMO, and there is no current huge breakthrough in fusion research that they are afraid of missing out on.
westoncb 2 hours ago [-]
I think it might just be about distribution. Grok gets a lot of interesting opportunities for it over X; then throw in the way people reacted to the new 4o image-gen capabilities.
ben_w 2 hours ago [-]
OpenAI's idea of "shortly" offering AGI is "thousands" of days out; 2,000 days is just under 5.5 years.
saltysalt 3 hours ago [-]
Indeed! Ultimately, all online business models end at ad click revenue.
make3 3 hours ago [-]
this might just be a way to generate data
pyfon 5 hours ago [-]
It is a Threads. How is that doing?
parhamn 4 hours ago [-]
There could be too-many-cooks in the AI research part of their work.
Also, I don't think Sama thinks like a typical large-org manager. OpenAI has enough money to have all sorts of products/labs that are startup-like. No reason to stand by waiting for the research work.
smt88 3 hours ago [-]
Altman doesn't think like a typical large org manager because he's never successfully built or run one. He failed upward into this role.
OpenAI doesn't have enough money to even run ChatGPT in perpetuity, so building internal moonshots is an irresponsible waste of investor funds.
gorgoiler 8 hours ago [-]
The analogy is with Iain Banks’ The Culture.
Anyone can be anything and do anything they want in an abundant, machine assisted world. The connections, cliques, friends and network you cultivate are more important than ever before if you want to be heard above the noise. Sheer talent has long fallen by the wayside as a differentiator.
…or alternatively it’s not The Culture at all. Is live performance the new, ahem, rock star career? In fifty years time all the lawyers and engineers and bankers will be working two jobs for minimum wage. The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
idiotsecant 5 hours ago [-]
The Culture presents such a tempting world view for the type of people who populate HN.
I've transitioned from strongly actually believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.
I don't even think it'll be from terminators and nuclear wars and that sort of thing. I think it will come wrapped in a hyper-specific personalized emotional intelligence, tuned to find the chinks in our memetic firewalls just so. It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.
t0lo 4 hours ago [-]
That's why it's so important to reduce all of your personal data points online. Imagine what they can reconstruct based on their modeling and comparing you to similar users. I have 60 years of involuntary data collection ahead of me. This is not going to be fun.
lucianbr 51 minutes ago [-]
A brave new world of AI, one might say.
overfeed 2 hours ago [-]
> It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.
Which is why we'll need to acquire the drug gland technology before AGI - no mind can sell me anything if I can feel content on demand.
"Amused to Death"
Great title and an even better album. https://en.wikipedia.org/wiki/Amused_to_Death
The Culture is about a post-capitalist utopia. You're describing yet another cyberpunk-esque world where people still have to do wage-labor to not starve.
gorgoiler 6 hours ago [-]
You’re right so I made a slight edit to separate my two ideas. Thanks for even reading them at all! I try to contribute positively to this site when I can, and riffing on the overlap between fiction and real-life — a la Doctorow — seems like a good way to be curious.
Nursie 2 hours ago [-]
> The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
We already have so many of those that it’s very hard to make any sort of living at it. Very hard to see a world in which more people go into that market and can earn a living as anything other than a fantasy.
Cynically - I think we'd probably end up with more influencers, people who are young, good looking and/or charismatic enough to hold the attention of other people for long enough to sell them something.
comrade1234 7 hours ago [-]
Naah… in the Culture you could change your sex at will, something soon to be illegal.
Duanemclemore 7 hours ago [-]
I haven't been happier online in the last 10 years than after I stopped checking social media. And even in that miserable time, it wasn't a naked beg for training data like this.
But I really don't see why anyone would even use an open ai "social network" in the first place.
It does allow one thing for OpenAI, other than training data (which admittedly will probably be pretty low quality): it is a natural venue for ad sales.
interludead 1 hour ago [-]
Stepping away from social media can feel like getting your brain back
Duanemclemore 7 hours ago [-]
Oh I get one thing - other than ads. So the idea of an LLM filter to algorithmically tailor your own consumption has some utility.
The logical application would be an existing social network -using- chat gpt to do this.
But all the existing ones have their own models, so if they can't plug in to an existing one like goooooogle did to yahoo in the olden days, they have to start their own.
That makes a certain amount of (backward) sense for them. I don't think it'll work. But there's some logic if you're looking from -their- worldview.
8n4vidtmkvmk 5 hours ago [-]
Isn't the selling point behind Bluesky that you can customize your feed your way? I don't know the tech behind that, but the feed is "open", isn't it? Can they plug into that?
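The feed is indeed publicly readable. A rough sketch of pulling posts from Bluesky's public AppView; the endpoint and response shape are assumptions about the current atproto API, worth verifying against the docs:

```python
# Hedged sketch: read public Bluesky posts over the open XRPC API.
# Endpoint and parameter names reflect the public AppView as I understand
# it; treat them as assumptions, not a vetted integration.
import requests

def search_posts(query: str, limit: int = 25) -> list[dict]:
    resp = requests.get(
        "https://public.api.bsky.app/xrpc/app.bsky.feed.searchPosts",
        params={"q": query, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("posts", [])

# e.g. for post in search_posts("openai"): print(post["record"].get("text", ""))
```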
rcpt 4 hours ago [-]
The selling point of Bluesky is that you don't get bombarded with a mob of blue checks every time you post something political
SecretDreams 7 hours ago [-]
Social media is a plague, including LinkedIn. Anything that lets you follow others and/or erodes your anonymity is just different degrees of cancer waiting to happen.
The best I ever enjoyed the internet was the sweet spot between dial up and DSL where I was gaming in text based/turn based games, talking on forums, and chatting using IRC.
Duanemclemore 6 hours ago [-]
Agreed. I wasn't particularly hooked, didn't use it very much already. As an architect, designer, and professor I had ig, and for the last five years basically only for work. But the feeling of freedom in its absence these past few months has been palpable.
Early fb reconnecting with people I hadn't seen since high school was okay. The blog / Google Reader era happening at the same time was the real golden age for me. And it's been all downhill since.
imafish 50 minutes ago [-]
Agreed. This is where HN and Reddit still smell a little bit like the good old times ;)
saltysalt 3 hours ago [-]
Strongly agree. It's fascinating to me how faster broadband and selfie cameras led to more slop content.
SecretDreams 2 hours ago [-]
We can think of fast internet and phones as supercharging progress. Except, in this case, it just accelerated how quickly humans ruin it.
bufferoverflow 5 hours ago [-]
LOL, you're on a social network right now. HN is one. Yeah, it's semi-anonymous, but there are many users with known names here.
croes 1 hour ago [-]
But I can’t follow them.
I don’t get notifications when they post new links or comments, I can’t send them specifically my links and comments.
I have no groups or circles.
HN is more of a discussion forum and not for connecting with others.
gloosx 1 hour ago [-]
Wrong, a social network is centered around the concept of "you" and your "friends", where the content itself is not as important.
There is no concept of "friends" on a forum like HN, since people gather purely to discuss topics of interest here.
svara 13 minutes ago [-]
I guess where this is all going in the long run is something with an interface similar to TikTok, where the user gives rapid feedback to train an algorithm to generate content that they "love", er, that maximally tickles their reward circuitry.
beloch 14 hours ago [-]
>One idea behind the OpenAI social prototype, we’ve heard, is to have AI help people share better content. “The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”
This would be a decent PR stunt, but would such a platform offer anything of value?
It might be more valuable to set AI to the task of making the most human social platform out there. Right now, Facebook, TikTok, Reddit, etc. are all rife with bots, spam, and generative AI junk. Finding good content in this sea of noise is becoming increasingly difficult. A social media platform that uses AI to filter out spam, bots, and other AI with the goal of making human content easy to access might really catch on. Set a thief to catch thieves.
Who are we kidding. It's going to be Will Smith eating spaghetti all the way down.
TheOtherHobbes 7 hours ago [-]
An interesting use for AI right now would be using it as a gatekeeping filter, selecting social media for quality based on customisable definitions of quality.
Using it as a filter instead of a generator would provide information about which content has real social value, which content doesn't, and what the many dimensions of "value" are.
The current maximalist "Use AI to generate as much as possible" trend is the opposite of social intelligence.
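A minimal sketch of such a gatekeeping filter, assuming an OpenAI-style chat-completions client; the rubric wording, model name, and threshold are invented for illustration, and a real system would need calibration and auditing:

```python
# Hypothetical sketch: score posts against a user-defined quality rubric
# with an LLM, keep only those above a threshold. Rubric and threshold are
# illustrative assumptions, not any product's actual behavior.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Rate the post from 0-10 for original insight, factual care,
and absence of outrage bait. Reply with a single integer."""

def quality_score(post_text: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": post_text},
        ],
    )
    try:
        return int((resp.choices[0].message.content or "").strip())
    except ValueError:
        return 0  # unparseable answer -> treat as low quality

def curate(feed: list[str], threshold: int = 7) -> list[str]:
    return [post for post in feed if quality_score(post) >= threshold]
```

The interesting design point is that the rubric string is user-supplied, which is what "customisable definitions of quality" would mean in practice.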
falcor84 6 hours ago [-]
It's a nice idea in principle, but it would probably immediately become a way for the admins to promote some views and discourage others, with the excuse that some opinions are of lower quality.
_Algernon_ 3 hours ago [-]
That's what moderation is and is perfectly fine. Dang does that here on HN and for good reason.
numpad0 54 minutes ago [-]
It's not moderation; for one thing, it never will be used with moderation.
petesergeant 6 hours ago [-]
I think that's right. Twitter without ads, showing you content you _do_ want to see using some embeddings magic, with decent blocking mechanisms, and not being run as a personal mouthpiece by the world's most unpopular man ... certainly not the worst idea.
dom96 7 hours ago [-]
Why would AI be any better at filtering out spam than developers have so far been with ML?
The only way to avoid spam is to actually make a social network for humans, and the only way to do so is to verify each account belongs to a single human. The only way I've found that this can be done is by using passports[0].
0 - https://onlyhumanhub.com
That's interesting. Is there a social network where you can only connect with people you meet in real life?
kikoreis 5 hours ago [-]
(Stretching a definition of social network.)
Not strictly but Debian, where member inclusion is done through an in person chain of trust process so you have clusters of people who know each other offline as a basis.
Also, most WhatsApp contacts have been exchanged IRL, I presume.
omneity 7 hours ago [-]
How do you handle binationals who might not have the same details (or even name) on each of their passports?
sampullman 6 hours ago [-]
You can always get around identification requirements, for example by purchasing a fake passport in this case. The idea is to increase the cost/friction of doing so as much as possible.
A fake ID is a lot harder to get your hands on than a new email, burner phone, etc.
dayvigo 6 hours ago [-]
So you have to just trust them to permanently delete the data after verifying you?
add-sub-mul-div 14 hours ago [-]
No, nothing of value. If you ever want to lose faith in the future of humanity search "@grok" on Twitter and look at all the interactions people have with it. Just total infantilism, people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read, arguing with it or whining to Musk if they don't get the answer they want to confirm what they already believe.
rudedogg 10 hours ago [-]
I bookmarked this example where it is confidently incorrect about a movie frame/screenshot:
https://x.com/Pee159604/status/1909445730697462080
The worst is like a dozen people in the replies to a post asking Grok the exact same obvious follow-up question. Somehow, having access to an LLM has completely annihilated these commenters' ability to scroll down 50 pixels.
a_bonobo 2 hours ago [-]
All decent people I know have deleted their Twitter accounts - the kind of people you now see on twitter in the mentions are... not good people.
golergka 8 hours ago [-]
> people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read
It's bad that this need exists. However, introducing this feature did not create the need. And if this need exists, fulfilling it is still better, because otherwise these kinds of people wouldn't get this information at all.
_Algernon_ 3 hours ago [-]
This is worse because the AI slop is full of hallucinations which they will now confidently parrot. No way in hell does this type of person verify or even think critically about what the LLMs tell them. No information is better than bad information. Less information while practicing the ability to critically use it is better than bad information in excess.
ein0p 9 hours ago [-]
You also can get Grok to fact check bullshit by tagging @grok and asking it a question about a post. Unfortunately this is not realtime as it can sometimes take up to an hour to respond, but I've found it to be pretty level headed in its responses. I use this feature often.
paride5745 26 minutes ago [-]
It makes no sense to build a social network nowadays.
With Mastodon and Bluesky around, users have free options. Add X and Threads, and you can see how the market is more than saturated.
IMHO they should look into a close collaboration with, or minority stake in, Bluesky or Reddit instead. You get a huge pool of users already, without the need to build it up from scratch.
Heck, OpenAI probably has enough money to just buy Reddit if they want.
b1n 10 minutes ago [-]
Also, what is their USP? "Join our social network so we can train our models on your data!"
tiffanyh 10 hours ago [-]
My guess ... it's probably less of a "social network" and more of a "they are trying to build a destination (portal) where users go to daily".
E.g. old days of Yahoo (portal)
sho_hn 10 hours ago [-]
They just want the next wave of Ghibli meme clicks to go to them, really.
This will be built on the existing thread+share infra ChatGPT already has, and just allow profiles to cross-post into conversations, with UI and features more geared toward remixing each other's images.
herpdyderp 8 hours ago [-]
That was my thought: a meme-sharing platform.
beepbopboopp 10 hours ago [-]
The answer seems more obvious to me. They don't even care if it's competitive or scales too much. xAI has a crazy data advantage firehosing Twitter, Llama has FB/IG, and ChatGPT just has, well, the internet.
I'd hope they have some clever scheme to acquire users, but ultimately they want the data.
latency-guy2 4 hours ago [-]
I actually would love this. I hate having to go to another website to share some thoughts I had while using tools on a platform.
I miss the days when products would actually integrate other platforms into their experiences; yes, I was sort of a fan of the FB/Google share button and the Twitter side feed (not the tracking bits though).
I wasn't a fan of LLMs and the whole chat experience a few years ago. I'm a very mild convert now with the latest models and I'm getting some nominal benefit, so I would love to have some kind of shared chat session to brainstorm, e.g. on a platform better than Figma.
The one integration of AI that I think is actually neat is Teams + AI note-taking. It's still hit or miss a lot of the time, but it at least saves and notes something important 30% of the time.
Collaboration enhancements would be a wonderful outcome in place of AGI.
GeorgeCurtis 24 minutes ago [-]
The whole value proposition of a social network is that (almost) everyone you know is on it. That's why young people don't use Facebook.
They'd be better off buying one.
gerash 3 hours ago [-]
I believe the play here is:
1. Look "Studio Ghibli" went viral, let's capitalize
2. Switching cost for LLMs are low. If we can't be the best let's find other ways to lock our users in and make our product super sticky
mrandish 3 hours ago [-]
Okay, thinking charitably here... maybe a play at getting training data they don't have to steal? (although it does seem like rotating the ladder instead of the lightbulb...)
BrenBarn 3 hours ago [-]
So Facebook is trying to get into AI (e.g. its chatbot-"user" debacle) and OpenAI wants to form its own social network. Our world is becoming the recycled shit-food of this technological ouroboros.
frabona 9 hours ago [-]
Feels like a natural next step, honestly. If they already have users generating tons of content via ChatGPT, hosting it natively and adding light social features might just be a way to keep people engaged and coming back. Not sure if it's meant to compete with Twitter/Instagram, or just quietly become another daily habit for users
pclmulqdq 7 hours ago [-]
This would be a natural step if it were 2010. In 2025, it sounds like a lack of imagination to me.
randomor 8 hours ago [-]
Controversial opinion: it's not about the generator of the content, human or not, but about the originality of the content itself. Humans with the help of AI will generate more good-quality content as a result.
Humans are just as good as bots in generating rubbish content, if not more so.
Twitter reduced content production cost significantly, AI can take it another step down.
At minimum, a social network where people share good prompt-engineering techniques will be valuable to people who are on the hunt for prompts. Just like the Midjourney website, except creating a high-quality image is no longer a trip to the beach but a thought experiment. This will also significantly cut down the cold-start friction, and in combination with some free credits, people may have more reasons to stay, as the current chat-based business model may reach its limit for revenue generation and retention, since it's just single-player mode.
godelski 7 hours ago [-]
> but about the originality of the content itself
Your metric is too ill-defined. Here, have some highly unique content
If we need unique valid human language outputs I'll still disagree. Most human output is garbage. Good luck on your two tasks: 1) searching for high quality content 2) de-duplicating. Both are still open problems and we're pretty bad at both. De-duping images is still a tough task, before we even begin to address the problem of semantic de-duplication.
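For a concrete sense of the gap: a perceptual hash handles the easy pixel-level case, but scores semantically duplicate images (two photos of the same scene) as far apart. A sketch using the third-party imagehash package:

```python
# Sketch of why image de-duplication is hard: a perceptual hash catches
# near-identical pixels (crops, re-encodes), but two different photos of
# the same subject hash far apart, so semantic dedup needs something else.
from PIL import Image
import imagehash

def near_duplicate(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    ha = imagehash.phash(Image.open(path_a))
    hb = imagehash.phash(Image.open(path_b))
    # Subtraction gives Hamming distance: 0 for identical images,
    # small for trivial edits, large for semantic duplicates.
    return ha - hb <= max_distance
```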
visarga 1 hours ago [-]
The idea is to let humans be humans: make a mess, debate, have their opinions. AI comes after that and removes the herp derp from the useful parts.
As a test of the concept, copy-paste this whole page into an LLM and ask for an article. It will come out without junk, but will reflect a greater diversity of opinion and more arguments, will do debunking, and will generally be better grounded in our positions than the original content.
So it's careful synthesis over human chats that is the end value. Humans provide the novelty and lived experience LLMs lack; LLMs provide consistent formatting and synthesis. The companies that understand that users are the source of entropy and novelty will stop trying to own the model and start trying to host the question.
I wonder why Reddit doesn't generate thousands of articles per day from its comment pages. It would crush traditional media in both diversity and quality, and it would follow the interesting topics naturally.
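A hedged sketch of that test, assuming an OpenAI-style client; the prompt wording is invented:

```python
# Sketch of the "thread -> article" experiment the comment above describes.
# Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def thread_to_article(thread_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
                "Turn this comment thread into a balanced article: keep the "
                "distinct positions and arguments, drop the noise and insults."},
            {"role": "user", "content": thread_text},
        ],
    )
    return resp.choices[0].message.content

# e.g. article = thread_to_article(open("hn_thread.txt").read())
```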
nottorp 4 minutes ago [-]
... for bots. The "AI" bots are lonely and this will let them talk to each other.
karel-3d 2 hours ago [-]
AI generated posts and images and nonstop posting about AI? That sounds like LinkedIn in 2025.
beaugunderson 1 hour ago [-]
> While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed.
isn't it public already? they basically made tumblr but everything is AI:
https://sora.com/explore
Maybe before building a social network, you should be able to share the result of an answer with another user, even if they have not paid/subscribed.
Tried to share an answer with a colleague (who didn't have the paid version) and he couldn't see it...
chazeon 10 hours ago [-]
I think a social network is not necessarily a timeline-based product, but an LLM-native/enabled group chat can probably be a very interesting product. Remember, ChatGPT itself is already a chat.
simple10 10 hours ago [-]
Yes, this. That's my bet if OpenAI follows through with social features.
Extend ChatGPT to allow multiple people / friends to interact with the bot and each other. It would be an interesting UX challenge if they're able to pull it off. I frequently share chats from other platforms, but typically those platforms don't allow actual collaboration and instead clone the chat for the people I shared it with.
thomasfromcdnjs 8 hours ago [-]
I am building this with a team currently and we are launching in a couple days.
Would love an alpha tester or two if anyone wants to test it.
My email/twitter is in my profile, shoot me a message and I will be in touch.
sdwr 9 hours ago [-]
Yeah, the dream is the AI facilitating "organic" human connection
sho_hn 10 hours ago [-]
What's a "LLM-native/enabled group chat"?
simple10 10 hours ago [-]
Telegram and slack bots are probably the best example so far. Bot gets added to a chat and can respond when mentioned in the group chat.
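A minimal sketch of that pattern with Slack's Bolt framework: the bot stays silent until @-mentioned, then answers in-thread. The llm_reply stub stands in for whatever model call you'd actually make:

```python
# Sketch of a mention-triggered group-chat bot using slack_bolt.
# Tokens come from the environment; llm_reply is a placeholder, not a
# real model API.
import os
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

def llm_reply(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI, Anthropic, local, etc.)
    return f"(model reply to: {prompt[:80]})"

@app.event("app_mention")
def handle_mention(event, say):
    # event["text"] is the message that mentioned the bot; reply in-thread
    say(llm_reply(event["text"]), thread_ts=event.get("ts"))

if __name__ == "__main__":
    app.start(port=3000)
```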
sho_hn 10 hours ago [-]
Gotcha, the NLP-enabled version of the good old IRC weatherbot.
For a moment I had a funnier mental image of a chat app with an input field that treats every input as a prompt, and everyone's chatting through the veil of an LLM verbosity filter.
There might be something chat RPG-like there worth trying though ...
pluto_modadic 1 hours ago [-]
They know AI can be addictive (people will prompt it far too often), so mixing it with social media can captivate users even more effectively.
beambot 6 hours ago [-]
Makes me (further) believe that Reddit is heavily undervalued...
alphazard 5 hours ago [-]
Alright, I'll bite. What's a reasonable price for Reddit? Aren't most of their users bots?
pyfon 5 hours ago [-]
Doesn't matter. Subreddits create vast islands of value. A single sub overrun with bots is quarantined effectively.
That is why Reddit is one of my favourite social sites. It is algorithmic but if you go to r/assholedesign you get asshole design. (and an anal mod who keeps it like that) Etc.
Value $44bn ;)
blitzar 2 hours ago [-]
Discord is the real play.
mushufasa 10 hours ago [-]
Sounds like they are thinking about instagram, which originated as a phone app to apply filters to a camera and share with friends (like texting or emailing them or sending them a link to a hosted page), and evolved into a social network. Their new image generation feature has enough people organically sharing content that they probably are thinking about hosting that content on pages, then adding permissions + follow features to all of their existing users' accounts.
honestly it's not a terrible idea. it may be a distraction from their core purpose, but it's probably something they can test and learn from within a ~90 day cycle.
CharlieDigital 10 hours ago [-]
Sounds like some crossover with Civit.ai
buyucu 9 minutes ago [-]
openai is getting crushed by competitors who are offering more cost-effective alternatives.
so sam altman is pressing all buttons to keep the hype train going.
jsnider3 15 hours ago [-]
With all the other social networks trying to keep their data private because they all want to try their own AIs, it makes sense that OpenAI would want to have its own social network that wouldn't charge them for the data. I still doubt they actually launch it.
Is this just a data play? Need more data. Start a social network. Own said data.
sva_ 10 hours ago [-]
I think it's more likely that they're desperate to find a profitable business model.
000ooo000 8 hours ago [-]
Seems telling that an org that has arguably the leading AI, as the planet knows it at least, still can't exist without putting ads in front of eyes. So much for the hype.
guywithahat 5 hours ago [-]
Honestly I wonder if it’s because Altman loves X and is threatened by Grok
mr90210 10 hours ago [-]
[flagged]
blitzar 14 hours ago [-]
This is just part of the ongoing feud between Sama and Musk.
uptownfunk 6 hours ago [-]
It’s all whatever will maximize valuation. They can do it until antitrust comes for them.
numpad0 14 hours ago [-]
A 4chan but images can be prompt generated? Makes sense. Everything's going back to early 2000s, it seems.
misonic 4 hours ago [-]
this doesn't sound appealing to me. what extra value would it provide? why do I need a new X with AI support, making friends with agents/bots?
xpl 4 hours ago [-]
One interesting benefit is that OpenAI would be able to detect bots using their APIs to generate content.
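A hedged sketch of the simplest version of that idea: fingerprint every completion the API serves, then check posted content against the store. This is an entirely hypothetical design, not anything OpenAI has described:

```python
# Hypothetical sketch: flag posts that were generated through your own API
# by fingerprinting completions at serve time. Exact-match hashing is
# trivially defeated by paraphrasing; it only illustrates the idea.
import hashlib

served_fingerprints: set[str] = set()

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

def record_completion(text: str) -> None:
    # Call this for every completion the API returns.
    served_fingerprints.add(fingerprint(text))

def looks_api_generated(post: str) -> bool:
    return fingerprint(post) in served_fingerprints
```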
Nijikokun 9 hours ago [-]
ngl building a social network isn't hard, getting people to use a social network is the hard part
janalsncm 9 hours ago [-]
An idea which sounds horrifying but would probably be pretty popular: a Facebook like feed where all of your “friends” are bots and give you instant gratification, praise, and support no matter what you post. Solves the network effect because it scales from zero.
This is Altman increasing the mass of the investment black hole that OpenAI is.
interludead 1 hour ago [-]
Curious to see if this leans more "creative community" or "algorithmic content zoo."
tossandthrow 2 hours ago [-]
The American playbook: make some innovation, then do a bait-and-switch and focus all energy on value extraction.
paulvnickerson 9 hours ago [-]
Sam Altman is retaliating against Musk for Grok and Musk's lawsuit against OpenAI, trying to ride the wave of anti-Musk political heat, and figure out a way to pull in more training data due to copyright troubles.
If they launch, expect a big splash with many claiming it is the X-killer (i.e. the same people who claimed the same of Mastodon, Threads, and Bluesky), especially around here at HN, and then nobody will talk about it anymore after a few months.
AlienRobot 9 hours ago [-]
Here's how to kill Twitter and Bluesky AND Mastodon:
1: use an LLM to extract the text from memes and relatable comics.
2: use an LLM to extract the transcriptions of videos.
3: use an LLM to censor all political speech.
OpenAI, I believe in you. You can do it. Save the Internet.
If you can clean my FYP of current events I'll join your social media before you can ask a GPT how to get more users.
numpad0 39 minutes ago [-]
Not-exactly-devil's-advocate: you're trying to sort content by quality. That's elitist. Also, that filtered content is worth more; you can't have only premium content.
Someone should do it anyway and make it dominant ASAP.
mcmcmc 8 hours ago [-]
> 3: use an LLM to censor all political speech.
And who gets to decide what is political? Are human rights political? Is a trans person merely existing political? Is calling for genocide political?
AlienRobot 7 hours ago [-]
The LLM decides it. That's what the AI is for.
There is a lot of stuff on the Internet, so I think the AI can just censor 80% of it and we're still going to have enough for a social media site.
terminatornet 9 hours ago [-]
computer: show me jingling keys
pfraze 8 hours ago [-]
wonder if the LLM would censor this post
AlienRobot 7 hours ago [-]
If it works the way I want it absolutely should. Mentioning politics is political content.
zombiwoof 9 hours ago [-]
Just use an LLM to verify only humans are on the social network and no bots and you win
paulvnickerson 9 hours ago [-]
^-- This comment proves my point.
outside1234 10 hours ago [-]
Aren't they unprofitable enough already?
Nevermark 5 hours ago [-]
A social network that faithfully and intelligently curated posts according to my own continuously updated (explicit) direction would be most excellent.
But it would also juice echo chamber depth and further amplify extremist "engagement".
And the monetary incentives for OpenAI to generate most of the content, the "people", and the ads, including creative hallucinations and novel extremisms, so they directly match each of our curation directions, would enshittify the whole thing within a short minute.
--
The time has come to outlaw conflict of interest businesses that scale (the conflict).
If a startup plan includes "sales" and "customers": Green light go.
If it talks about ways to "monetize": Red trash can.
If only.
That would be an interesting evolution.
Social media is becoming TikTok’s clone army, with algorithms hooked on short-form videos for max engagement.
Text, images, and long-form content are getting crushed, forcing creators into bite-sized video to be favored by the almighty algorithm.
It’s like letting a kid pick their meals - nothing but sugar and candy all day.
tayo42 2 hours ago [-]
You're probably not wrong.
But personally I don't get it. I hate almost all videos. The exception is when something is best shown as a video, like a how-to, but I search those out.
Maybe I'm just broken at this point. Who has the patience for videos? Clearly a lot, but how lol
aussieguy1234 1 hour ago [-]
So far, I've refused to watch Tiktok.
Something about mindless garbage doesn't appeal to me.
candiddevmike 10 hours ago [-]
What else are they going to spend billions on to turn a profit?
grg0 6 hours ago [-]
I don't know, but a weight bench goes under $200 and Sam needs some chest gains fast.
bhouston 15 hours ago [-]
I've always thought that social networks like X and Bluesky are sort of like the distributed consciousness of society. It is what society, as a whole / in aggregate, is currently thinking about, and knowing its ebbs and flows and what it responds to is important if you want to have up-to-date AI.
So yeah, AI integrated with a popular social network is valuable.
ahartmetz 15 hours ago [-]
Social networks tend to reflect the character of their founders. Do you really want to see what Sam Altman can do?
bhouston 14 hours ago [-]
> Social networks tend to reflect the character of their founders.
I would say "owners" rather than "founders", but I agree with you. I think Sam Altman's couldn't be worse than Elon Musk's X, no?
daqhris 14 hours ago [-]
Both are founders of a so-called non-profit and are suing each other. Their legal arguments are public at this point. By reading them, one may understand that it's hard to choose between 'yes' and 'no' as an answer. Maybe we could request and take into account the opinion of what they 'created' that might outlast them and their conflict, namely AI.
ahartmetz 14 hours ago [-]
I don't use X either. Looks like it won't be around for much longer anyway, except as an American Pravda (even though "Truth" Social already exists).
newaccountlol 10 hours ago [-]
Make sure to hide your little sisters from it.
andrewstuart 13 hours ago [-]
They should use their resources to make OpenAI good at coding.
prvc 10 hours ago [-]
Is making yet another twitter clone really the way to build a path towards super-intelligence? A worthy use of the organization's talent?
blitzar 2 hours ago [-]
Another Twitter clone will help the decline of human intelligence; the dumber humans are, the smarter the AI appears.
arcatech 10 hours ago [-]
Collecting millions of people’s thoughts and interactions with each other IS probably on the path to better LLMs at least.
sho_hn 10 hours ago [-]
I'd love for my agents to be created in the image of humanity's best side, its interactions on social media.
Perhaps then we can all let LLMs take care of tweeting outrage for us, and go outside to find each other rolling around on the grass.
basisword 8 hours ago [-]
I can’t think of anything less appealing or interesting. AI content as a destination has zero appeal.
Apocryphon 8 hours ago [-]
We've got gen AI now and no ZIRP, yet this is all they can think of. Web 2.0 will never die.
labrador 10 hours ago [-]
It'd be cool to see Google+ resurrected with OpenAI branding. Google+ was actually a pretty well designed social network
WJW 10 hours ago [-]
Not well designed enough to live, though.
int_19h 9 hours ago [-]
It doesn't matter how well-designed it is if people aren't there. Social graph lock-in is the single biggest issue with any contender.
AlienRobot 9 hours ago [-]
Not well designed to live under Google*
Tumblr is still alive. LiveJournal is still alive. Newgrounds is still alive and Flash doesn't even exist anymore.
piva00 10 hours ago [-]
I don't believe it was well designed; it felt clunky to use, and concepts weren't intuitive enough to understand after a few uses.
I tried to use it for a few months after release, always got frustrated to the point I didn't feel like reaching out to friends to be part of it.
The absurd annoyance of its marketing, pushing it into every nook and cranny of Google's products was the nail in the coffin. I'm starting to feel as annoyed by the push with Gemini, it just keeps popping up at annoying times when I want to do my work.
bluetux01 10 hours ago [-]
that would be cool, google+ was very unique and i was kinda sad google killed it off
swyx 10 hours ago [-]
what did you like about it?
labrador 10 hours ago [-]
I liked the UX. I liked Circles. There were other nice options that I can't remember but I thought Google+ was a big improvement over Facebook.
clonedhuman 9 hours ago [-]
AI bots already make up a significant percentage of users on most social networks. Might as well just take the mask off completely--soon, we'll all be having conversations (arguments, most likely) with 'users' with no real human anywhere near them.
api 9 hours ago [-]
I've been saying for a while that the next innovation beyond TikTok, Instagram, and YouTube is to get rid of human creators entirely. Just have a 100% AI-generated slop-feed tailor made for the user.
There's already a ton of AI slop on those platforms, so we're like half way there, but what I mean is eliminating the entire idea of humans submitting content. Just never-ending hypnotic slop guided by engagement maximizing algorithms.
philipov 10 hours ago [-]
Imagine that, a social network where all of the participants are bots.
shaftoe444 7 hours ago [-]
Logical conclusion of AI is to generate slop for slop feeds so why not own your own slop feed.
kittikitti 14 hours ago [-]
I would try to make a platform like Deviantart or Tumblr except OpenAI pays you to make good content that the AI is trained on.
paxys 6 hours ago [-]
You really think an OpenAI-sponsored social network is going to attract people who create and share original content?
malux85 13 hours ago [-]
Nice in theory but don’t know how practical it is to actually do.
How do you define "good"? There are obvious examples at the extremes but a chasm of ambiguity between them.
How do you compute value? If an AI takes 200 million images to train, wait let me write that out to get a better sense of the number:
200,000,000
Then what is the value of 1 image to it? Is it worth the 3 hours of human labour time put into creating it? Is it worth 1 hour of human labour time? Even at minimum wage? No, right?
abc-1 9 hours ago [-]
Hahaha they’re cooked. GPT 4.5 was a massive flop. GPT 4.1 is barely an improvement after over a year. Now they’re grasping at straws. Anyone actually in this field who wasn’t a grifter knew improvements are sigmoidal.
All the original talent has already left too.
danity 7 hours ago [-]
Just what the world needs, another social network!
throw_m239339 6 hours ago [-]
What would be the point? Why would it even need real members?
paxys 6 hours ago [-]
Ads
siva7 11 hours ago [-]
Sam got a jawline lift, anyone noticed?
beeflet 7 hours ago [-]
Yes, I've been cataloging the mewing and lookmaxxing progress of hundreds of public figures
dlivingston 9 hours ago [-]
Did he? Flipping back and forth between old vs. new photos of him, his facial structure seems roughly the same.
tomrod 7 hours ago [-]
Maybe, but Substack is building a much more engaging social network. I'm frankly amazed at how good it is.
rglover 7 hours ago [-]
I speculated a ways back [1] that this was why Elon Musk bought Twitter. Not to "control the discourse" but to get unfettered access to real, live human thought that you can train an AI against.
My guess is OpenAI has hit limits with "produced" content (e.g., books, blog posts, etc) and think they can fill in the gaps in the LLMs ability to "think" by leveraging raw, unpolished social data (and the social graph).
[1] https://news.ycombinator.com/item?id=31397703
But collecting more data is a naive strategy on its own. The reason scale works is because of the way we typically scale: by collecting more data, we also tend to collect a wider variety of data and more good-quality data. But that has serious limits. You can only do this so much before you become equivalent to the naive scaling method. You can prove this yourself fairly easily: train a model on image classification, then take one of your images and permute one pixel at a time. You can get a huge amount of scale out of this, but your network won't increase in performance. It is actually likely to decrease.
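The permutation experiment is easy to state concretely. A NumPy-only sketch of the "scaling" in question; the function name is illustrative:

```python
# Sketch of the thought experiment above: "scale" a dataset by changing
# one pixel at a time. You get vastly more images but almost no new
# information, so accuracy won't improve.
import numpy as np

def one_pixel_variants(image: np.ndarray, n: int, rng=None) -> list[np.ndarray]:
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    variants = []
    for _ in range(n):
        v = image.copy()
        y, x = rng.integers(h), rng.integers(w)
        v[y, x] = rng.integers(0, 256, size=v.shape[2:])  # random new pixel
        variants.append(v)
    return variants

# A 224x224 image has ~50k pixel positions to perturb: enormous "scale",
# near-zero added variety.
```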
chewbacha 7 hours ago [-]
If that were the case he (Musk) wouldn't have turned it into a Nazi-filled, red-pilled echo chamber.