Delta has a long-term strategy to boost its profitability by moving away from set fares and toward individualized pricing using AI. The pilot program, which uses AI for 3% of fares, has so far been “amazingly favorable,” the airline said. Privacy advocates fear this will lead to price-gouging, with one consumer advocate comparing the tactic to “hacking our brains.”
I have been predicting this for a while now. Once this takes effect, the airline no longer bears responsibility for what sets the prices. The AI could, for instance, become deeply racist, driving prices through the roof for people of color if it somehow determines that well-paying racist customers will pay more to fly with only white people. Plenty of scenarios like that could unfold, and since it's basically impossible to trace which inputs an LLM based a decision on, no one can be held responsible for such choices.
Oh, and I'm sure the data from 23andMe will be abused soon to ensure that only healthy people get good prices. The consequences of the personal data that "didn't matter that we shared" are about to unfold.
I haven't seen inside their system, but the chances of it being an LLM are close to zero, not least because LLMs are notoriously unreliable at calculating numbers. It's far more likely that they're saying "AI" because shareholders, and that it's actually something closer to traditional ML.
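To make "traditional ML" concrete: fare models of this kind are typically regressions over tabular booking features, not language models. Here's a toy sketch, with entirely made-up numbers and a single hypothetical feature (days until departure), fit with the closed-form least-squares formulas. This is purely illustrative and has nothing to do with Delta's actual system:

```python
# Hypothetical (days_until_departure, observed_fare) training pairs.
# All numbers are invented for illustration.
data = [(60, 180.0), (30, 220.0), (14, 310.0), (7, 420.0), (2, 610.0)]

# Closed-form simple linear regression: slope = cov(x, y) / var(x).
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predict_fare(days_out: float) -> float:
    """Predict a fare from the fitted line (closer departures cost more)."""
    return intercept + slope * days_out
```

Note that a model like this is the opposite of a black box: the slope and intercept are right there to inspect, which is part of why "it's just traditional ML" and "no one can explain the price" sit uneasily together.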
Sure, I sometimes use AI and LLM interchangeably, but I believe the point still stands. If they were asked to trace the source of a price difference, the answer likely lies buried in layers upon layers of training data aimed at maximizing profit, and in the long run it would probably be impossible to say which data produced the result.