  • It’s not saying the states are acting capriciously or even unreasonably; it’s just that the system would treat it as such.

    The system would declare that the proper remediation is the states suing for their funds and having the justice system fix it. If the justice system so orders the disbursement and the federal government refuses to pay out, then I could imagine the settlement terms permitting the state to deduct owed funds from their payments. If the justice system fails to rule appropriately, then the state doesn’t have legal recourse, but it may still make sense to act outside the system anyway.


  • At least in my car, the lane following system (not the lane keeping system) is handy because the steering wheel naturally tends to go where it should, and less often am I “fighting” the tendency to center. The lane keeping system is, at least for me, largely a non-event. If I use my turn signal, it doesn’t fight me crossing a lane line. If circumstances demand an evasive maneuver that crosses a line, its resistance isn’t enough to cause an issue. At least mine has fared surprisingly well in areas where the lane markings are all kind of jacked up due to temporary changes for construction. If it is off, then my arms just have to assert more effort to end up in the same place I was going to be with the system. Generally no passenger notices when the system engages/disengages except for the chiming when it switches over to unaided operation.

    So at least my experience has been a positive one, but it strikes the balance just right between intervention and human attention, including monitoring gaze to make sure I am looking where I should. However, there are people who test “how long can I keep my hands off the steering wheel”, which is a more dangerous mode of thinking.

    And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized ‘overhead’ view of your car.


  • To the extent it is people trying to fool people, it’s rich people looking to fool poorer people for the most part.

    To the extent it’s actually useful, it’s to replace certain systems.

    Think of the humble phone tree, designed so humans aren’t having to respond, triage, and route calls. An AI system can significantly shorten that role: instead of navigating a tedious, long maze of options, a couple of sentences back and forth and you either get the piece of automated information that would suffice or get routed to a human to take care of it (a rough sketch of that routing idea is below). The same analogy applies to a lot of online interactions where you have to input way too much, and when the answer is automated you get a wall of text, when you’d rather something distill the relevant 3 or 4 sentences according to your query.
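    To make the routing idea concrete, here’s a toy sketch in Python. The intents, keyword lists, and canned answers are all hypothetical placeholders; a real system would put an LLM where the keyword scoring sits, but the flow is the same: answer directly when a canned response suffices, otherwise hand off to a human.

    ```python
    # Toy "AI instead of a phone tree": classify a caller's free-form
    # request into a route rather than walking them through a menu.
    # All intents/phrases/answers here are hypothetical placeholders.

    ROUTES = {
        "billing": ["bill", "charge", "payment", "refund"],
        "outage": ["down", "outage", "no service", "not working"],
        "hours": ["open", "hours", "closed", "holiday"],
    }

    CANNED_ANSWERS = {
        "hours": "We are open 8am-6pm, Monday through Friday.",
    }

    def route_call(utterance: str) -> str:
        """Score each route by keyword hits; fall back to a human agent."""
        text = utterance.lower()
        scores = {
            route: sum(phrase in text for phrase in phrases)
            for route, phrases in ROUTES.items()
        }
        best = max(scores, key=scores.get)
        if scores[best] == 0:
            return "human agent"  # nothing matched, don't guess
        # Answer directly when a canned response suffices, else hand off.
        return CANNED_ANSWERS.get(best, f"human agent ({best} queue)")

    print(route_call("hey, what time are you open on Saturday?"))
    print(route_call("my bill has a charge I don't recognize"))
    ```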

    So there are useful interactions.

    However, it’s also true that it’s dangerous, because the “make the user approve of the interaction” objective can bring out the worst in people when they feel like something is just always agreeing with them. Social media has been bad enough, but chatbots that by design want to please the end user, and look almost legitimate, really can inflame the worst in our minds.


  • The thing about self driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress, then plateaued as approaches failed to close the gap, with exponentially more input thrown at it for less and less incremental subjective improvement.

    But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, so that makes an opportunity for AI systems to augment a human driver and get the best of both worlds. A consistently vigilant computer monitors and tends the steering, acceleration, and braking to do the ‘right’ thing in neutral conditions, with the human looking for the more anomalous situations that the AI tends to get confounded by, and making the calls on navigating certain intersections that the AI FSD still can’t figure out. At least for me, the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.

    I don’t have a Tesla, but I have a competitor system and have found it useful, though not trustworthy. It’s enough to greatly reduce the drain of driving, but I have to always be looking around, and I have to assert control if there’s a traffic jam coming up (it might stop in time, but it certainly doesn’t slow down soon enough) or if I have to do a lane change in traffic (if traffic conditions are light, it can change lanes nicely, but without a whole lot of breathing room it won’t do it, which is fine when I can afford to be stupidly cautious).


  • I think the self driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self driving is kind of dumb, but it’s at least consistently paying attention, and it literally has eyes in the back of its head.

    However, there’s so much data about how it fails in stupidly obvious ways it shouldn’t, so you still need human attention to cover the more anomalous scenarios that foul self driving.


  • Now there’s models that reason,

    Well, no, that’s mostly a marketing term applied to expending more tokens on generating intermediate text. It’s basically writing a fanfic of what thinking about a problem would look like. If you look at the “reasoning” steps, you’ll see artifacts where the generated output goes disjoint: text that is structurally sound, but not logically connected to the bits around it. Mechanically it’s just more of the same token generation, as in the sketch below.
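    To illustrate, here’s a toy sketch of what that amounts to mechanically, assuming (as these models are generally described) that “reasoning” mode is the same next-token loop run longer before the visible answer. sample_next_token and the <think> markers are made-up stand-ins, not any real API.

    ```python
    # Toy of the mechanics only: "reasoning" is the same next-token loop,
    # just run longer to emit intermediate text before the answer.
    # sample_next_token is a made-up stand-in for a real model forward pass.

    import random

    def sample_next_token(context: list[str]) -> str:
        """Stand-in for an LLM: pick the next token given everything so far."""
        vocab = ["the", "turn", "lane", "so", "therefore", "left", "merge"]
        return random.choice(vocab)

    def generate(prompt: list[str], token_budget: int) -> list[str]:
        """One loop serves both phases; neither phase is mechanically special."""
        out = list(prompt)
        for _ in range(token_budget):
            out.append(sample_next_token(out))
        return out

    prompt = ["question:", "which", "lane?"]
    # "Reasoning": spend a bigger token budget on intermediate text first...
    thoughts = generate(prompt + ["<think>"], token_budget=256)
    # ...then produce the visible answer with the same loop and same model.
    answer = generate(thoughts + ["</think>"], token_budget=16)
    print(" ".join(answer[-16:]))
    ```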


  • The probabilities of our sentence structure are a consequence of our speech; we aren’t just trying to statistically match appropriate-sounding words.

    With enough use of LLMs, you will see how they are obviously not doing anything like conceptualizing the tokens they’re working with, or “reasoning”, even when marketed as “reasoning”.

    Sticking to textual content generation by LLMs, you’ll see that what is emitted is first and foremost structurally appropriate; beyond that, it’s mostly a “bonus” for it to be narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly the opposite of the explanation. Both portions were structurally sound, reasonable language, but there was no logical connection between the two portions of the emitted output. The toy sketch below shows the same “locally plausible, globally unconstrained” behavior at miniature scale.
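    Here’s that idea in miniature: a toy bigram babbler (a deliberately tiny caricature of next-token prediction; the word table is made up). Every adjacent word pair is locally “appropriate”, yet nothing constrains the whole output to be consistent, so one run can say an action “is correct” and another that it “was wrong” with equal ease.

    ```python
    # Toy bigram sampler: each word is chosen only from what tends to
    # follow the previous word. Output reads plausibly word-by-word while
    # carrying no commitment to overall logical or factual consistency.

    import random

    FOLLOWERS = {
        "the": ["car", "lane", "action"],
        "car": ["turned", "stopped"],
        "turned": ["left", "right"],
        "action": ["is", "was"],
        "is": ["the", "correct"],
        "was": ["the", "wrong"],
    }

    def babble(start: str, length: int) -> str:
        """Generate locally plausible text with no global constraints."""
        words = [start]
        for _ in range(length):
            words.append(random.choice(FOLLOWERS.get(words[-1], ["the"])))
        return " ".join(words)

    print(babble("the", 8))
    ```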


  • So I’ve seen a fair number of people claim that Trump outperforming downballot is a sign of some cheating.

    The question I have in response is: why would the GOP bother to cheat only in the presidential election? It seems like if they were going to go all in on cheating, they’d rig things more soundly in their favor. Maybe they would have let Robinson take a dive because of his particular circumstances, but they would have assured themselves at least a supermajority in the state legislature to compensate.

    A simpler explanation is that people didn’t vote “Republican”, they voted “Trump”. It’s also plausible that some of them didn’t vote “Trump” so much as they voted “not the woman of color with a foreign-sounding name”.


  • I’ll go a bit further and say this particular hill is not the best one to choose, as presidents have long unilaterally launched military operations and it’s been broadly declared legal, even if it makes no sense. Changing the law would be good, but as the law stands, it’s a hard argument to make that Trump should be impeached because of his unilateral decision to strike Iran but every other president in recent history shouldn’t have been impeached over their unilateral strikes.

    You’d need to select some way in which he has behaved illegally, in a way that looks corrupt, and in a way that is different from other presidents who have been given free passes. He provides such circumstances pretty routinely, so I don’t know why you’d go for this one.


  • improved how we recognize and diagnose it.

    Well, we at least have changed how we recognize and diagnose it; I’m not totally convinced it’s 100% an “improvement”. We’ve kind of jumbled up a whole bunch of people under a common umbrella and diluted the implications of the term, to the point where it tells you negligible practical information when someone is described as “autistic” or “on the spectrum”.


  • Keep in mind this is a system with millions of miles under its belt, and it still doesn’t understand what to do with a forced left turn lane on a very short trip in a fairly controlled environment with supremely good visual, road, and traffic conditions. LIDAR wouldn’t have helped the car here; there was no “whoops, confusing visibility”, it just completely screwed up and ignored the road markings.

    It’s been in this state for years now: surprisingly capable, yet with horrible screw-ups noted frequently. They seem to be like 95% of the way there and stuck, with no real progress, just some willful denial convincing them to move forward anyway.


  • Navigation issue / hesitation

    The video really understates the level of fuck up that the car did there…

    And the guy sitting there, just casually being OK with the car ignoring the forced left, going straight into oncoming lanes, and flipping the steering wheel all over the place because it has no idea what the hell just happened… I would not be just chilling there…

    Of course, I wouldn’t have gotten in this car in the first place, and I know they cherry-picked some hardcore Tesla fans to be allowed to ride at all…


  • The thing that strikes me about both this story and the thing you posted is that the people in the Tesla seem to be like “this is fine” as the car does some pretty terrible stuff.

    In that one, the Tesla fails to honor a forced left turn, instead opting to go straight into oncoming lanes and waggle about, causing other cars to honk at it, and the human just sits there without trying to intervene. Meanwhile they describe it as “navigation issue/hesitation”, which really understates what happened there.

    The train one didn’t come with video, but I can’t imagine just letting my car turn itself onto tracks and travel 40 feet before doing anything.

    If my Ford so much as thinks about drifting too close to another lane, I’m intervening, even if it was really going to be no big deal. I can’t imagine this level of “oh well”.

    Tesla drivers/riders are really nuts…