• 9 Posts
  • 474 Comments
Joined 2 years ago
Cake day: August 4th, 2023

  • This was a developed-in-house e-commerce web application at a major e-retailer. So fortunately that monstrosity of a cookie-handling mess was only ever used by one company.

    You know what, though? Talking about this reminds me of another story about the same e-commerce application.

    After a customer placed an order on this e-commerce site, the company’s fraud department had to evaluate the order to make sure it wasn’t fraudulently placed. (As in, with a credit card not owned or authorized for use by the purchaser.) Once that was done, the order had to be communicated to a worker at the warehouse so they could pack the right items into a box, put on a shipping label, and set the box aside to be picked up by the UPS truck which would come once a day near the end of the day.

    The application used by the fraud department and the application that displayed new orders to warehouse workers were one and the same. Whether a user had fraud-evaluating powers or pack-items-in-boxes powers just depended on what permissions their particular user account had. (That may have been decided by LDAP groups. I don’t remember for sure.)

    Meanwhile, the e-commerce site offered gift cards for sale online. The gift card would be shipped to the customer, and there was a box where you could write a message to go with it. So, for instance, someone could buy a gift card to be sent to their nephew’s address and include a little note like “Happy Birthday. Don’t spend it all at once.” And the fraud/pick-and-pack application would display all details of the order, including any messages associated with the gift cards.

    Well, I found a stored cross-site scripting vulnerability where if you put <script>...</script> tags with some JavaScript in the gift card message box and completed the order, the JavaScript would execute any time someone viewed the details page for the order in the fraud/pick-and-pack application. And of course, the JavaScript could do within that application just about anything the user could do with their given permissions.

    The main danger was that a malicious actor with sufficient knowledge of how our fraud application worked could place an order fraudulently with someone else’s credit card and include a gift card with a malicious JavaScript payload in the message box. That JavaScript could then automatically mark the order “a-ok, no fraud here” the moment a fraud department worker loaded the order details page, letting the order be fulfilled without any actual fraud review.

    The fix was pretty simple: just stick a <c:out>...</c:out> in the appropriate place in the fraud/pick-and-pack application code. But it was an interesting example of a vulnerability in a non-customer-facing application that could nonetheless be exploited by any member of the public without any special access.
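    For context, JSTL’s <c:out> tag HTML-escapes its value before writing it to the page. The effect can be sketched in plain Java — this is a minimal illustration with a hypothetical escapeHtml helper, not the actual application code:

    ```java
    // Minimal sketch of what JSTL's <c:out> does: replace the characters
    // that would otherwise let user input break out of text and into markup.
    public class EscapeDemo {
        static String escapeHtml(String s) {
            StringBuilder out = new StringBuilder(s.length());
            for (char c : s.toCharArray()) {
                switch (c) {
                    case '<':  out.append("&lt;");   break;
                    case '>':  out.append("&gt;");   break;
                    case '&':  out.append("&amp;");  break;
                    case '"':  out.append("&#034;"); break;
                    case '\'': out.append("&#039;"); break;
                    default:   out.append(c);
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            // A gift card message carrying a payload like the one described above.
            String giftMessage = "<script>approveOrder()</script>";
            // Escaped, the payload is displayed as inert text instead of executing.
            System.out.println(escapeHtml(giftMessage));
        }
    }
    ```

    Escaping at output time like this is exactly why the one-tag fix was sufficient: the stored message never changes, it just stops being interpreted as markup.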

    If you’re interested in one more story about the same e-commerce application, see this comment I made a while ago.



  • Java webapp. Customer facing. E-commerce application, so in PCI scope and dealt with credit card info and such.

    There was one specific cookie that stored some site-wide preference for the customer. (Why not just put that preference in the database associated with the user? Because that would make too much sense is why.)

    But the way they encoded the data to go into the cookie? Take the data and run it through the Java serialization framework (which is like Python’s “Pickle” or Go’s “Gob”) to turn it into a byte stream. Raw binary data is kind of weird to put in a cookie, so you base64 encode the result. (The base64 encoding was the only sane step in the whole process.) Then you do the reverse when you receive the cookie back from the browser. (And no, there was no signature check or anything.)

    The thing about the Java serialization framework, though, is that decoding back into Java objects runs deserialization hooks (readObject and the like) on whatever classes the incoming bytes name — and gadget chains built from classes already on the classpath can amount to arbitrary code execution. And there’s no checking in the deserialization part of the Java serialization framework until your code tries to cast the result to whatever type you’re expecting. By that point, the arbitrary code execution has already happened. In short, this left a gaping vulnerability that could easily have been used to extremely ill effect, like a payment information breach or some such.

    So all a malicious user had to do to run arbitrary code on our application server was serialize something, base64 encode it, and then send it to our servers as a cookie value. (Insert nail biting here.)
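    The round trip the application performed can be sketched like this — a minimal reconstruction, not the original code. The danger lives entirely in the readObject() call, which deserializes whatever the client sent before the cast to Integer ever runs:

    ```java
    import java.io.*;
    import java.util.Base64;

    public class CookieCodec {
        // Serialize a value and base64-encode it, as the app did for the cookie.
        static String encode(Serializable value) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
                oos.writeObject(value);
            }
            return Base64.getEncoder().encodeToString(buf.toByteArray());
        }

        // Decode the cookie value coming back from the browser. readObject()
        // reconstructs whatever object graph the bytes describe -- running its
        // deserialization hooks -- before the (Integer) cast is ever checked.
        // That ordering is the vulnerability.
        static int decode(String cookieValue) throws IOException, ClassNotFoundException {
            byte[] raw = Base64.getDecoder().decode(cookieValue);
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(raw))) {
                return (Integer) ois.readObject();
            }
        }

        public static void main(String[] args) throws Exception {
            String cookie = encode(42);
            System.out.println(decode(cookie)); // prints 42
        }
    }
    ```

    Note that nothing in decode() constrains what classes the bytes may name; the cast at the end is the first and only check, and it comes too late.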

    When we found out that there was a severe vulnerability, I got the task of closing the hole. But the existing cookies had to continue to be honored. The boss wasn’t ok with just not honoring the old cookies and developing a new cookie format that didn’t involve the Java serialization framework.

    So I went and learned enough about the internal workings of how the Java serialization framework turns a Java value into a binary blob to write custom code that handled only the subset of the serialization format we absolutely needed for this use case and no more. My custom code did not allow for arbitrary code execution. It was weird and gross, and I made sure to leave a great big comment explaining why we’d done such a thing. But it closed the vulnerability while still honoring all the existing cookies, so customers didn’t lose the preference they’d set. I was proud of it, even though it was weird and gross.

    The value that was serialized to put into the cookie? A single Java int. Not a big POJO of any sort. Just a single solitary integer. They could just as well have “serialized” it using base-10 rather than using the Java serialization framework plus base64.
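    For contrast, here’s what the base-10 alternative would have looked like — a sketch of the sane design, not anything that was actually in the codebase:

    ```java
    public class SafeCookieCodec {
        // "Serializing" a single int as base-10 text: no object graph,
        // no deserialization machinery, no arbitrary code execution.
        static String encode(int preference) {
            return Integer.toString(preference);
        }

        static int decode(String cookieValue) {
            // parseInt throws NumberFormatException on anything that isn't an
            // int, which is exactly the strictness you want from cookie input.
            return Integer.parseInt(cookieValue);
        }

        public static void main(String[] args) {
            System.out.println(decode(encode(42))); // prints 42
        }
    }
    ```

    The attack surface of Integer.parseInt is a thrown exception; the attack surface of ObjectInputStream.readObject is the entire classpath.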


  • The costs of distribution aren’t really that expensive for big companies.

    You can’t really trust that users are going to be willing to donate hard drive space and upload bandwidth to help your maps service or whatever work. (Though, to be fair, you did mention things like OpenStreetMap which is probably more likely for users to be willing to support that way.)

    BitTorrent isn’t something you can seamlessly integrate into browser-based apps.

    But also, there are newer technologies based on a very BitTorrent-like P2P way of doing things. IPFS is basically reskinned BitTorrent. And PeerTube uses in-browser P2P (via WebRTC) to distribute videos. I don’t think there’s any standard in, say, HTML5 that allows for P2P without some hacks, but it sounds like there’s a good chance such a standard will make its way into browsers in the relatively near future. Also, it sounds like Chrome supports more than Firefox in that area right now.





    Not exactly the densest material out there, but pennies are cheap and easily procured. They may not be quite what you’re looking for, given your use case. (You asked about “cost/weight ratio” and “weight to space,” which makes it sound like you’re looking to add a lot of weight.)

    I’ve been known to make a fully-enclosed cylindrical cavity and set my slicer to pause at exactly the right layer to where I can drop a few stacks of pennies into the print before upper layers seal the cavity closed.




    I skimmed it to find the parts where it talks about why LLMs aren’t useless. Basically the only such place is the section “…, and sophists are useful”:

    > If I use a LLM to help me find a certain page in a document, or sanity check this post while writing it, I don’t care “why” the LLM did it. I just care that it found that page or caught obvious mistakes in my writing faster than I could have.

    So, I’m supposed to wade through the BS and hallucinations to find these nuggets of helpful feedback rather than just proofreading it myself? That’s a pretty weak use case.

    > I don’t think I need to list the large number of tasks where LLMs can save humans time, if used well.

    So he’s basically admitting he can’t come up with any actually good uses. “Pay no attention to the man behind the curtain.”

    > By all means, use LLMs where they are useful tools: tasks where you can verify the output, where speed matters more than perfection, where the stakes of being wrong are low.

    There’s no universe where such a use case exists in a way that isn’t actively harmful or at least “brain rot”-y to anyone consuming the content created by the LLM user. This is why AI slop exists.

    In short, “yes it does.”