Wikipedia's founder said he used ChatGPT in the review process for an article and thought it could be helpful. Editors replied to point out that its output was full of mistakes.
How do you prevent Sybil attacks without making it overly expensive to vote?
How would you mount a Sybil attack against a system where the initial creator signs the initial voters, and the voters then collectively sign elections, the acceptance of new members, and so on? That doesn’t seem to be a problem for a system with authorized voters.
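Roughly what I mean, as a minimal sketch (the quorum rule, the class names, and the choice of Ed25519 via the PyNaCl library are my own assumptions, not a spec):

    from nacl.signing import SigningKey, VerifyKey

    QUORUM = 2 / 3  # assumed supermajority rule; nothing above fixes one

    class Registry:
        def __init__(self, creator_key: SigningKey):
            self.creator_verify = creator_key.verify_key
            self.members: set[bytes] = set()  # raw verify-key bytes

        def add_founder(self, founder: VerifyKey, creator_sig: bytes):
            # A founding voter is valid only with the creator's signature.
            self.creator_verify.verify(bytes(founder), creator_sig)
            self.members.add(bytes(founder))

        def admit(self, applicant: VerifyKey, endorsements: dict[bytes, bytes]):
            # endorsements maps a member's verify-key bytes to that member's
            # signature over the applicant's key. Only current members count.
            valid = 0
            for member_bytes, sig in endorsements.items():
                if member_bytes in self.members:
                    VerifyKey(member_bytes).verify(bytes(applicant), sig)
                    valid += 1
            if valid < QUORUM * len(self.members):
                raise PermissionError("not enough member endorsements")
            self.members.add(bytes(applicant))

The quorum fraction is the tunable part; the point is that every membership decision carries signatures, so the whole chain back to the creator stays auditable.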
Flood them with AI-generated applicants.
So why would they accept said AI-generated applicants?
If we are building a global system, then confirmation against some nation’s ID can be used, with fakes removed once they are found out, much like IRL nation states do. Or “bring a friend and be responsible if they turn out to be a fake”. Or both at the same time.
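The “bring a friend” part could be as simple as a ledger that remembers who vouched for whom (a sketch; the penalty rule and the names are made up for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class VouchLedger:
        # member -> the member who vouched for them
        voucher_of: dict[str, str] = field(default_factory=dict)
        banned: set[str] = field(default_factory=set)

        def admit(self, applicant: str, voucher: str) -> None:
            if voucher in self.banned:
                raise PermissionError("voucher has been banned")
            self.voucher_of[applicant] = voucher

        def expose_fake(self, member: str) -> None:
            # Remove the fake and hold the voucher responsible. Here the
            # penalty is also a ban; a strike count would be gentler.
            self.banned.add(member)
            voucher = self.voucher_of.get(member)
            if voucher is not None:
                self.banned.add(voucher)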
Would every participant get to see my government-issued ID?
One could elect a small group which will see the ID and will sign its connection to some intermediate identifier. Then only they will see it.
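A sketch of what that small group might publish (the salted-hash commitment and the function names are my assumptions; again using PyNaCl for the signatures):

    import hashlib, os
    from nacl.signing import SigningKey, VerifyKey

    def attest(committee_key: SigningKey, id_number: str,
               member_key: VerifyKey) -> tuple[bytes, bytes, bytes]:
        # Only the committee ever handles id_number. It publishes a salted
        # commitment to the ID plus a signature binding it to the member key.
        salt = os.urandom(16)
        commitment = hashlib.sha256(salt + id_number.encode()).digest()
        signature = committee_key.sign(commitment + bytes(member_key)).signature
        return salt, commitment, signature  # the salt stays with the member

    def check(committee_verify: VerifyKey, commitment: bytes,
              member_key: VerifyKey, signature: bytes) -> None:
        # Any participant can verify the committee vouched for this key
        # without learning anything about the underlying ID.
        committee_verify.verify(commitment + bytes(member_key), signature)

The committee would still need its own private index of IDs it has already attested, to reject duplicates; the salted commitment deliberately hides that information from everyone else.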
How do we know if they’re doing a good job without being able to review their work?