The California Supreme Court will not prevent Democrats from moving forward Thursday with a plan to redraw congressional districts.

Republicans in the Golden State had asked the state’s high court to step in and temporarily block the redistricting efforts, arguing that Democrats — who are racing to put the plan on the ballot later this year — had skirted a rule requiring state lawmakers to wait at least 30 days before passing newly introduced legislation.

But in a ruling late Wednesday, the court declined to act, writing that the Republican state lawmakers who filed the suit had “failed to meet their burden of establishing a basis for relief at this time.”

  • wildncrazyguy138@fedia.io · 2 days ago

    I was skeptical of your assertion, so I peppered Copilot with a few prompts and it seems to confirm your point.

    ---

    States with the Greatest Untapped Gerrymandering Potential

    Below are the key one-party trifecta states whose current congressional maps rate as relatively fair (Princeton A or B). These jurisdictions have the structural guardrails of independent or bipartisan commissions in place—but if those were overridden or relaxed, the controlling party could pick up a small handful of extra seats.


    1. A-Grade Maps under Unitary Control

    | State | Controlling Party | 2021 Map Grade | Current House Seats | Estimated Additional Seats | Source |
    | --- | --- | --- | --- | --- | --- |
    | Arizona | Republican | A | 9 | +1 | A |
    | Colorado | Democratic | A | 8 | +1 | A |
    | Washington | Democratic | A | 10 | +1 | A |

    Arizona’s independent commission maps gave Republicans a near-proportional 5–4 split on a 50-50 statewide vote; stripping or subverting that commission could flip one more GOP seat. Colorado and Washington delivered Democrats fair shares of 4–4 and 8–2 respectively; each could see one extra Democratic district if guardrails were weakened.


    2. B-Grade Maps under Unitary Control

    | State | Controlling Party | 2021 Map Grade | Current House Seats | Estimated Additional Seats | Source |
    | --- | --- | --- | --- | --- | --- |
    | California | Democratic | B | 52 | +5 | B |
    | New York | Democratic | B | 26 | +2–3 | C |

    California Democrats are already eyeing mid-cycle tweaks that would boost their delegation from 82.7% of seats to over 92.3%, a net gain of about five seats relative to a 58.5% vote share. New York’s Democrats hold 25 of 26 seats with roughly 58% of the vote; abandoning the independent commission could net them an additional two or three safe districts.


    Each of these states demonstrates that even jurisdictions with top-graded, commission-drawn maps can swing several seats if the party in power decides to scrap or weaken those commissions. Turning a single “fair” seat-voter curve into a heavily tilted map typically yields roughly one extra seat per ten districts—a small change with an outsized impact in a razor-thin U.S. House majority.
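
    Rule-of-thumb check: the “+X” estimates above track that one-extra-seat-per-ten-districts heuristic almost exactly. A minimal sketch verifying the arithmetic (the vote shares and seat counts are Copilot’s own unverified figures from the tables above, not official data):

    ```python
    # Sanity-check of the seat-share arithmetic quoted above.
    # All inputs are the (unverified) Copilot figures, not official data.
    ca_seats = 52
    dem_now = round(0.827 * ca_seats)    # 82.7% of 52 -> 43 seats
    dem_after = round(0.923 * ca_seats)  # 92.3% of 52 -> 48 seats
    print(dem_now, dem_after, dem_after - dem_now)  # 43 48 5 (the "+5")

    # The one-extra-seat-per-ten-districts heuristic, per state:
    for state, seats in {"Arizona": 9, "Colorado": 8, "Washington": 10,
                         "California": 52, "New York": 26}.items():
        print(f"{state}: ~{round(seats / 10)} extra seat(s)")
    ```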

    • jordanlund@lemmy.world (mod) · 2 days ago

      Reported as “AI Slop Post”

      but a) we don’t have a rule against that.

      and b) OP clearly noted they used Copilot to generate it; they aren’t trying to pass it off as their own.

      I’m actually OK with this. Obviously we’ll remove AI-generated ARTICLES that get posted, same as we’d remove videos and such, but in a comment? Clearly noted as AI? I think I’m OK with that.

      If y’all WANT a rule about it, hit me up. I’ll bring it up with the other mods and admins.

      • ToastedPlanet@lemmy.blahaj.zone · 2 hours ago

        I’ve got three arguments for why you should make a rule against LLM comments, even ones publicly marked as AI. And I’m going to refer to AI as LLMs, because large language models are what we’re dealing with here.

        First, LLMs aren’t a reliable source of information, especially for recent events. They regurgitate training data based on weights calibrated during training, and those weights produce results that, especially where numbers are involved, can look right for the topic while still being the wrong number. For recent events they will lack the relevant data outright, because it wasn’t in the data set they were trained on; until that data is added, the LLM is giving an answer to something it doesn’t know, for lack of a better phrasing. These are commonly known limitations of the LLMs we are discussing.

        If people start using LLMs to argue, the comment sections are going to fill up with pages of made-up LLM garbage. LLMs will generate more misinformation than anyone can keep up with debunking, especially at the moments when misinformation can do the most damage, like the weeks leading up to the special election this November 4th in California.

        I find it unlikely that all of the statistics the LLM listed, without sources, are accurate. But regardless of that, if a user were to respond by taking that comment and putting it into an LLM of their own, it’s not likely the LLM would even keep those numbers consistent. These errors would compound the longer a discussion between two LLMs went on.
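
        To make the compounding concrete, here’s a deliberately crude toy simulation (my own sketch, not a model of any real LLM; real errors aren’t uniform random noise, but the drift dynamic is the point):

        ```python
        import random

        # Toy model: a statistic gets passed back and forth between two
        # LLMs, and each "retelling" perturbs it by up to 5%. No single
        # step looks obviously wrong, but the drift accumulates.
        random.seed(0)
        value = 58.5  # e.g., a vote-share percentage from the thread above
        for turn in range(1, 11):
            value *= 1 + random.uniform(-0.05, 0.05)
            print(f"turn {turn}: {value:.1f}")
        # After ten turns the figure can be several points off, with no way
        # to tell from the transcript where the error crept in.
        ```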

        At best this all wastes people’s time and Lemmy becomes an extension of the LLM misinformation machine. At worst it becomes an attack vector: bad actors fill up comment sections with LLM discussions that promote one viewpoint and bury the rest. Knowing the comments are LLM-generated doesn’t solve these problems on its own.

        Second, we shouldn’t want to automate thinking. Tools are supposed to save time while preserving agency. My laptop saves me the time of sending you a letter in the mail and waiting for the response, but it doesn’t deny me agency when it does this: I still decide what I value and how that is communicated to you. The LLM saved OP’s time, if all OP wanted was text that looks correct at a glance, but it took away OP’s agency to think.

        Facts and data, purportedly accurate, are assembled into a structure to deliver a central point, but none of that is done with OP’s agency. It’s not OP’s thoughts or values being delivered to any of us. It’s not even a position held for the sake of a debate. It’s the LLM regurgitating, in the affirmative, the position it received in the prompt, because that’s what the LLMs we have access to do. It’s like shouting into a cave and getting the echo back.

        We aren’t getting what we want faster with LLM content; we are being denied it. The LLM takes away our ability to have a discussion with each other. Anyone using an LLM to think for them is by definition not participating in the discussion. No one can have a conversation, argument, or debate with this OP, because even though OP commented, OP didn’t write it. For lack of a better analogy, I might as well have a discussion with a parrot.

        What are we doing on this website if we are all going to roll out our LLMs and have them talk to each other for us? We could all just open two windows side by side and copy and paste prompts back and forth without needing a decentralized social media site as the middleman. The point of social media, and of Lemmy, is to talk to other people.

        Third, do you really want to volunteer to moderate LLM content? ChatGPT prose gets repetitive and it can never come up with anything new. I would not want to be stuck reading that all day.

        • jordanlund@lemmy.world (mod) · 2 hours ago

          I can definitely see the argument. OTOH, if someone actually owns up to it and says something on the order of “I dunno, so I asked ChatGPT and it says…”

          I think the admission/disclosure model is fine, AND it actually opens up the discussion of “OK, here’s why ChatGPT is wrong…”, which is a healthy discussion to have.

          But I can definitely bring it up with the group and see what people think!

          • ToastedPlanet@lemmy.blahaj.zone · 1 hour ago

            The issue is the scale. One comment can be fact-checked in under an hour. Thousands, not so much.

            Also, it’s not purely about accuracy. I want to be having discussions with other humans. Not software.

            Thanks for bringing this up to the group, I appreciate it! edit: typo

            • jordanlund@lemmy.world (mod) · 1 hour ago

              Scale is always a problem, and if someone is using it to spam, we’d ban it for spam.

              I see a LOT of generative spam posts; those get removed with a quickness, but that’s because of the spam, not because it’s generated.

              Discussion is open now; so far it’s leaning toward “hey, as long as they disclose it…”, which still leaves it open for us to remove undisclosed generated comments.

              But then you have the trap of “Well, how do you prove it if they don’t disclose it?” 🤔 There really is no LLM detector yet.

              • ToastedPlanet@lemmy.blahaj.zone · 32 minutes ago

                Bots could be used to spam LLM comments, but a user with an LLM assisting them can effectively act as a manual bot.

                > There really is no LLM detector yet.

                Unless the prompter goes out of their way to obfuscate the text manually, which sort of defeats the purpose, LLM comments tend to be very samey. The generated text would stand out if multiple users were running the same or even similar prompts. And OP’s stands out even without the admission.

                edit: to clarify, I mean stand out to the human eye; human mods would have to be the ones removing the comments
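
                A rough sketch of the “samey” intuition (a toy heuristic, not a real LLM detector): outputs from the same or similar prompts overlap far more in vocabulary than independently written comments do.

                ```python
                from collections import Counter
                import math

                # Bag-of-words cosine similarity: a crude proxy for "samey".
                def cosine(a: str, b: str) -> float:
                    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
                    dot = sum(wa[w] * wb[w] for w in wa)
                    na = math.sqrt(sum(v * v for v in wa.values()))
                    nb = math.sqrt(sum(v * v for v in wb.values()))
                    return dot / (na * nb) if na and nb else 0.0

                # Two outputs from near-identical prompts score high against
                # each other and low against an ordinary human comment:
                print(cosine("below are the key one-party trifecta states",
                             "below are the key two-party swing states"))  # ~0.71
                print(cosine("below are the key one-party trifecta states",
                             "scale is always a problem"))                 # 0.0
                ```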

    • sepi@piefed.social · 2 days ago

      Peak “I can’t think for myself, so let’s see what brainrot the ‘AI’ gives me” kinda deal. So cooked I can smell it’s well-done from all the way over here.

    • trevor (he/they)@lemmy.blahaj.zone · 2 days ago

      If you couldn’t be bothered to think or write for yourself, why would you think anyone would be bothered to read that?? It’s literally just pollution.

      • wildncrazyguy138@fedia.io · 2 days ago

        Now I know how liberal gun owners feel. Very rarely do I disagree with the left’s platform, but y’all are dismissing one of the most powerful tools ever given to mankind, and you do that at your peril.

        It has its faults just like humans do, but it is literally the culmination of all human knowledge. It’s Wikipedia for nearly everything at your fingertips.

        Perhaps the way y’all use it is wrong. It’s not meant to make the decisions for you; it’s a tool to get you 80% of the way there quickly, and then you do the last mile of work.

        Anywho, the premise stands. Democrats have more leverage to use gerrymandering if they do choose it, though I wish we weren’t in a place where they had to go with a nuclear option that threatens US democracy even more.

        • trevor (he/they)@lemmy.blahaj.zone · 2 days ago

          People just don’t like reading slop from lying machines. It’s really just that simple.

          Polluting a chat thread with slop is just a rude thing to do. Nobody likes sloppers.

        • techt@lemmy.world · 2 days ago

          The issue is that you didn’t confirm anything the text prediction machine told you before posting it as confirmation of someone else’s point, and then slid into a victimized, self-righteous position when pushed back upon.

          One of the worst things about how we treat LLMs is comparing their output to humans. They are not, figuratively or literally, the culmination of all human knowledge, and the only fault they have in common with humans is that they don’t check the validity of their answers. To use an LLM responsibly, you have to already know the answer to what you’re asking and be able to fact-check it. If you don’t do that, then the way you use it is wrong. LLMs are good for programming, where correctness is defined by a small set of rules, or for discovering patterns where we are limited, but don’t treat one as a source of knowledge when it constantly crosses its wires.

            • techt@lemmy.world · 2 days ago

              You have yet to suggest or confirm otherwise, so my point stands that your original post is unhelpful and non-contributive.

              • melsaskca@lemmy.ca · 24 hours ago

                I read the post and it was not unhelpful. My concern is that we are starting to use the magic 8-ball too much. Pretty soon we won’t be able to distinguish good information from bad, regardless of the source.

                • techt@lemmy.world · 19 hours ago

                  Yeah, I feel you. I don’t think the content is necessarily bad, but LLM output posing as a factual post needs, at a bare, bare minimum, to also include the sources the bot used to synthesize its response. And, ideally, a statement from the poster that they checked and verified against all of them. As it is now, no one except the author has any means of checking any of that; it could be entirely made up, and very likely is misleading. All I can say is it sounds good, I guess, but a vastly more helpful response would have been a simple link to a reputable source article.

    • TropicalDingdong@lemmy.world · 2 days ago

      I mean, I’d appreciate it if people didn’t just repost AI output, but I appreciate the support.

      It also follows as a conclusion from the algorithm of gerrymandering itself. It’s founded in the math of packing and cracking when you have a limited number of districts: in Republican gerrymandering you are necessarily making red districts closer to a toss-up while making blue districts safe. If you push it too far, then in a wave election it has the potential to fail catastrophically.

      The easiest way to find states at the greatest risk for this is to identify states where the presidential margin was close, but almost all the reps are red or blue.
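
      A minimal sketch of that screen (the numbers below are made-up placeholders, not real election data):

      ```python
      # Flag states whose statewide margin is close but whose House
      # delegation is nearly one-sided -- the signature of an aggressive
      # pack-and-crack map running on thin margins everywhere.
      states = {
          # name: (two_party_dem_vote_share, dem_seats, total_seats)
          "StateA": (0.49, 1, 9),    # placeholder values
          "StateB": (0.52, 10, 11),  # placeholder values
          "StateC": (0.58, 8, 14),   # placeholder values
      }

      for name, (dem_share, dem_seats, total) in states.items():
          seat_share = dem_seats / total
          close_margin = abs(dem_share - 0.5) < 0.05
          lopsided = abs(seat_share - 0.5) > 0.30
          if close_margin and lopsided:
              print(f"{name}: {dem_share:.0%} of the vote but "
                    f"{seat_share:.0%} of the seats -> wave-election risk")
      ```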