

Sure, just be aware that unripe tomatoes (not just the stems) are also toxic to cats.
You can’t just consider the cheese! You gotta look up all the ingredients!
Consensus: hold the tomato! Otherwise, if there’s no seasoning, everything else is acceptable in small amounts.
Glaring doesn’t imply a negative meaning. In this case it’s used to mean “obvious”.
Unless you’re suggesting that “glaring” means “obviously staring” (it doesn’t - that would be “glaringly staring”), this doesn’t make any sense.
“[He’s] glaring at [direct object]” is an example of a sentence that uses the present participle form of the verb “glare,” which explicitly communicates anger or fierceness.
If you’re not convinced, read on.
—————
The verb form that takes an object is:
Glare (verb with object): to express with a glare. “They glared their anger at each other.”
The noun form the above definition references is:
Glare (noun): a fiercely or angrily piercing stare.
“Glaring” can be an adjective and one of those definitions does mean “obvious” or “conspicuous,” but the use of that form of the word doesn’t make sense in her sentence. Think about a comparable sentence like “The undercover operative is conspicuous at the bar,” where the bar is the location. (Even then, most people wouldn’t use “glaring” in that sentence, as “conspicuous” or “obvious” are much less ambiguous; the operative could be staring piercingly or angrily at the bar rather than being glaring while being at the bar.) Another example that makes a bit more sense is “The effect of the invasive plants is glaring at the park.”
But for that interpretation to be valid here, you’d have to read the object of “at” as the place where he’s being conspicuous, rather than as the target of his stare.
That’s a bit of a stretch.
This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile
x-logging: &default-logging
  options:
    max-size: '10m'
    max-file: '5'
  driver: json-file

services:
  mongo:
    image: mongo:4.4
    volumes:
      - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
    logging: *default-logging

  nightscout:
    image: nightscout/cgm-remote-monitor:latest
    container_name: nightscout
    restart: always
    depends_on:
      - mongo
    logging: *default-logging
    ports:
      - 1337:1337
    environment:
      ### Variables for the container
      NODE_ENV: production
      TZ: [removed]

      ### Overridden variables for Docker Compose setup
      # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
      # and manage TLS certificates
      INSECURE_USE_HTTP: 'true'

      ### Required variables
      # MONGO_CONNECTION - The connection string for your Mongo database.
      # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
      # The default connects to the `mongo` service included in this docker-compose file.
      # If you change it, you probably also want to comment out the entire `mongo` service block
      # and the `depends_on` block above.
      MONGO_CONNECTION: mongodb://mongo:27017/nightscout

      # API_SECRET - A secret passphrase that must be at least 12 characters long.
      API_SECRET: [removed]

      ### Features
      # ENABLE - Used to enable optional features; expects a space-delimited list, such as: careportal rawbg iob
      # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
      ENABLE: careportal rawbg iob

      # AUTH_DEFAULT_ROLES (readable) - possible values: readable, denied, or any valid role name.
      # When readable, anyone can view Nightscout without a token. Setting it to denied will require
      # a token for every visit; using status-only will enable api-secret based login.
      AUTH_DEFAULT_ROLES: denied

      # For all other settings, please refer to the Environment section of the README:
      # https://github.com/nightscout/cgm-remote-monitor#environment
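With that saved as docker-compose.yml, `docker compose up -d` should bring up both containers, and Nightscout should answer on port 1337.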
To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,
services:
  nightscout:
    ports:
      - 3000:3000
You can remove the labels (those are used by Traefik), as well as the Traefik service itself.
Then just point Nginx to that port (e.g., 3000) on your local machine.
—————
Traefik has to know the port, too, but it will auto-detect the port that a local Docker service is running on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
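(If you ever want to pin it explicitly, Traefik v2 reads it from a service label, something like `traefik.http.services.nightscout.loadbalancer.server.port=1337` - the `nightscout` service name there is just illustrative.)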
There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research as soon as it’s solved.
Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”
Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).
In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).
https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI - because in fiction, talking computers were AI.
The author also quotes Jeff Hawkins’ book On Intelligence:
Now we can see the entire picture. Nature first created animals such as reptiles with sophisticated senses and sophisticated but relatively rigid behaviors. It then discovered that by adding a memory system and feeding the sensory stream into it, the animal could remember past experiences. When the animal found itself in the same or a similar situation, the memory would be recalled, leading to a prediction of what was likely to happen next. Thus, intelligence and understanding started as a memory system that fed predictions into the sensory stream. These predictions are the essence of understanding. To know something means that you can make predictions about it. …
The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear, and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence.
If Searle’s Chinese Room contained a similar memory system that could make predictions about what Chinese characters would appear next and what would happen next in the story, we could say with confidence that the room understood Chinese and understood the story. We can now see where Alan Turing went wrong. Prediction, not behavior, is the proof of intelligence.
Another reason why LLMs are still considered AI, in my opinion, is that we still don’t understand how they work - and by that, I of course mean that LLMs have emergent capabilities that we don’t understand, not that we don’t understand how the technology itself works.
We are. Why do you think we stopped?
I thought Hue bulbs used Zigbee?
The up arrow moves through the letters, e.g., A->B->C. The down arrow moves to the next character in the sequence, e.g., C->CA->CAA. If you click past the correct letter, you have to cycle all the way through again. And if you submit the wrong letter, you have to start all over (after it spends twenty seconds attempting to connect with the wrong password and then alerts you that it didn’t work, of course).
OP is also in the allegedly ultra rare camp of “successfully configured Jellyfin and lived to tell the tale.” Not what I’d expect of someone unable to configure Plex correctly. I’ve not set up a Plex server myself but my guess is it wasn’t clear that it was misconfigured - it did work previously, after all.
If they’re calling it remote streaming when you’re on the same (local) network, that’s not exactly intuitive. I’d say OP’s phrasing was fair.
The witch turned the creep into a woman and the spell was complete by the time she flew away. Unfortunately, like many women, the creep was born with the body of a man (she’s AMAB). Maybe the witch could have changed her body, too, but that would have made things far too easy, given that the point of the curse was to teach her empathy.
You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.
This is an interesting parallel, but I feel like I missed some key part of it.
In the US, at least, we historically killed off a lot of deer’s natural predators - mostly wolves - and as a result, the deer population can get out of control, causing serious problems for the ecosystem. Hunters help remedy that. The relatively small acts of violence they perform individually add up to an improvement in the overall ecosystem.
That isn’t the same as being a bigot, or a sexist, or a fascist… and I don’t know why anyone would assume that a person holds those views because they’re mean and petty. They hold those views for a variety of reasons - sometimes because they’re a child or barely an adult and that’s just what they learned, and they either don’t know any better or haven’t cared enough to think it through; sometimes because they’ve been conditioned to think that way; sometimes because they’re sociopaths who recognize that it’s easier to oppress that particular group.
It doesn’t really matter what their reason is. Either way, they’re a worse person because of it, and often they’re overall a bad person, regardless of the rest of their views, actions, and contributions.
Being a hunter, by contrast, is neutral leaning positive.
It makes sense that a rational person who loves being in nature, who loves animals, who wants their local ecosystem to be successful, would as a result want to help out in some small way, even if that means they have to kill an animal to do so. It doesn’t make sense that a rational person who loves all people, who wants their local communities to be successful, would as a result want to oppress and harm the people in already marginalized groups.
I don’t think equating bigotry with holding unjustifiable opinions does it justice. The way we use the word “opinion” generally applies to things that are trivial or unimportant, that don’t ultimately matter, e.g., likes and dislikes. Being a bigot is a viewpoint; it shapes you. For many bigots, their entire perspective is warped and wrong. And there’s a common misunderstanding that you can’t argue with someone’s opinions because they’re just how that person “feels.” But being a bigot, whether you’re sexist, racist, transphobic, queerphobic, homophobic, biphobic, etc., is a belief, and it’s one that, in most cases, the bigot chooses (consciously or not) to keep believing.
If an adult with functioning cognitive abilities refuses to question their bigoted beliefs, then they’ve made a choice to be a bigot.
Most anti-car people are in favor of improving public transit options.
Wow, there isn’t a single solution in here with the obvious answer?
You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
Then, you can either set up Let’s Encrypt on the device and have it generate certs in a location Jellyfin knows about (I’m not sure exactly what this entails, as I don’t use this approach), or you can do what I do:
On your router, forward port 443 to the secure port your Pi exposes (which, for simplicity’s sake, should also be 443). You’ll likely also need to forward port 80 so Let’s Encrypt can complete its verification.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback requests, then you can use the server’s IP address and expose Jellyfin’s HTTP ports (e.g., 8080) - just make sure to not forward those ports from the router. You’ll have local unencrypted transfers if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that you are vulnerable to a Jellyfin or Traefik vulnerability if one is found, so make sure to keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up (Traefik, Jellyfin, and a dynamic DNS service) with docker-compose.
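In the meantime, here’s a rough sketch of the shape of it. This isn’t my exact config - the subdomain, token, email, and volume paths are placeholders - and it assumes Traefik v2 handling Let’s Encrypt via DuckDNS’s DNS challenge, with a linuxserver/duckdns container keeping the record updated:

services:
  duckdns:
    image: lscr.io/linuxserver/duckdns:latest
    restart: always
    environment:
      SUBDOMAINS: 'your-subdomain'  # placeholder
      TOKEN: 'your-duckdns-token'   # placeholder

  traefik:
    image: traefik:v2.11
    restart: always
    ports:
      - 443:443
    environment:
      DUCKDNS_TOKEN: 'your-duckdns-token'  # placeholder; used for the DNS challenge
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.dnschallenge=true
      - --certificatesresolvers.le.acme.dnschallenge.provider=duckdns
      - --certificatesresolvers.le.acme.email=you@example.com  # placeholder
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: always
    volumes:
      - ./jellyfin-config:/config  # placeholder paths
      - ./media:/media
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`your-subdomain.duckdns.org`)  # placeholder
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096  # Jellyfin's default HTTP port

A nice side effect of the DNS challenge is that cert issuance doesn’t go over port 80, so with this variant you only have to forward 443.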
Look up “LLM quantization.” The idea is that each parameter is a number; by default each uses 16 bits of precision, but if you scale them down to smaller sizes, you use less space and have less precision, while still keeping the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, each parameter takes about half a byte, so you need roughly half the parameter count (in billions) as GB of VRAM, plus overhead for context. Q4_K_M is better than plain Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:
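(I don’t have the exact file sizes on hand, so these are approximations computed from each quantization’s typical bits per weight rather than the repo’s numbers: Q4_K_M at ~4.85 bpw ≈ 43 GB, Q5_K_M at ~5.69 bpw ≈ 50 GB, Q6_K at ~6.56 bpw ≈ 58 GB, and Q8_0 at 8.5 bpw ≈ 75 GB; unquantized fp16 would be ~141 GB.)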
This is why I run a lot of Q4_K_M 70B models on two 3090s.
Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization down to Q6_K (though I have heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to 5 and then to 4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
Not directly, but indirectly. You gerrymander the district-based positions, which allow you to pass legislation enabling you to suppress enough votes to win the statewide elections, too.
Illegal vote suppression elected Trump, but even if it hadn’t, you should blame Democrats before blaming people who voted for third party candidates. Now, if you’re talking about people who “protest voted” by voting for Trump (in both the primaries and the election), then sure. Those people did, in fact, play an instrumental part in electing him.
Why blame Democrats? Well, beyond just kinda being Republican-lites:
Democrats are the bare minimum “harm reduction” party, and I don’t bear any ill will toward people who voted for them rather than a party that would actually try to effect change, but the opposite mindset - blaming third party voters for not voting for Democrats - is very shortsighted. And as third party voters have never had the power to enact RCV or STAR voting or otherwise improve the system, blaming them instead of the Democrats who have had that power is inane.
I’ve voted for a Democrat every single presidential election that I’ve been able to, but I honestly wish I hadn’t. I’d much rather there be more visibility for third parties, and for more people to feel empowered to vote for third party candidates.