- cross-posted to:
- hackernews
Yes, it's from Reddit, but it's fairly interesting.
https://old.reddit.com/r/programming/comments/1hwj0sq/fired_from_meta_after_1_week_prolog_engineer/
Can this be explained by the fact that the Internet already has satisfactory answers to more and more questions, so there is less need to actively ask new ones?
I would imagine it’s more ChatGPT getting about as good as stackoverflow.
Couple that with the ability to ask a question without someone closing it as off topic, or a duplicate, or telling you you don’t actually need to do the thing you need to do, or bringing up the XY problem, or… well the list goes on.
ChatGPT might hallucinate sometimes but it’s nice about it and it fundamentally changes the barrier to entry to ask a question in the same way that stackoverflow once did.
Stackoverflow was a step change because it excelled at being a great place to ask questions: they gamified people actually answering them.
ChatGPT is another step change because it makes it so you can get a similar quality answer instantly and without any of the social baggage. It also allows you to have follow-ups and get into a groove of question and answer. It’s not always right but I was pleasantly surprised using it to navigate unfamiliar libraries and apis and being able to drill down on something. Even when it got something wrong it got it right enough that I could course correct without having to argue back and forth with someone.
Bleh, maybe I’m an old man, but when I’m searching stackoverflow, I find the context around the answers really helpful.
I.E. the top result may include caveats itself or have comments indicating why an answer might be problematic. And sometimes the best answer isn’t even the top answer. I’ve not used AI code assistance very much, but these all seem like things that the model is likely to take for granted.
But I also never contribute to stackoverflow, and agree I’d much rather engage with an AI than do THAT.
I see it pretty often saying “you could do it this way, or XYZ other way”
Yeah but the benefit of professionals and actual users speaking up is that they can speak to more than the immediate need.
ChatGPT is always ready to provide answers, but I’ve rarely seen it be aware enough to offer advice unprompted.
Basically, if you’re so wrong you don’t even know you’re wrong, ChatGPT isn’t likely to help, but people online can, or at least did.
There has been a large exodus over the last few months too.
Can you please tell me why? I read about it and it made a lot of noise, but I forgot what it was about.
If I recall, a lot of power users even threatened to delete their posts as a strike?
I think the big one recently is the opt-in AI training. A lot of people were not happy with their data being hoovered up and making SO a ton of $$. So they are replacing their answers with nonsense/deleting them. Plus there are now AI bots on SO… such a strange world.
SO is only useful if it’s filled with things that help out users. If it starts getting less foot traffic, an evaporation effect occurs where more and more users leave, thereby making it even less useful.
Thanks :) Now I remember! It’s always AI…
Dunno how well their strike went, and/or if their actions had any effect on stack* and derivatives…
I just hope there will be some open source replacement, maybe with federation? Dunno, I’m not a contributor, but stack* solved the questions I had most of the time, and I will probably miss the human interaction :/.
This, my altruism has its limits.
I used to land there a lot on my searches, but ChatGPT gives higher quality results, unbelievably. Kagi+AI goes far.