

Yeah, gotta jump to the 13 (waiting for mine with a Ryzen AI 7 350 now).
FWIW, they had very specific goals with the 12 and outlined the reasoning in a video.
Looks like it is provided here.
Waiting on my 13 (Ryzen AI 7 350). Hope they don’t claw it back for a price hike…
Scrubbing a little demo project I made featuring a web app behind oauth2-proxy, leveraging Keycloak as a local IdP with social login. It also uses a devcontainer config for development. The demo app uses the Litestar framework (f.k.a. Starlite, in Python) because I was interested, but it’s hardly the focus. Still gotta put Caddy in front of it all for easy SSL. Oh, and clean up all the default secrets I’ve strewn about with appropriate secret management.
All of it runs via rootless Podman with declarative configuration.
Think I might have to create my own Litestar RBAC plugin that leverages the OAuth headers provided by the proxy.
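If I build it, the simplest shape is probably a Litestar guard that trusts the proxy’s forwarded headers. A minimal sketch, assuming oauth2-proxy is configured to inject X-Forwarded-Groups (the header and the “admins” group below are placeholders, not anything the proxy guarantees out of the box):

```python
# Minimal sketch of a Litestar guard keyed off oauth2-proxy's forwarded
# headers. Assumes the proxy is configured to inject X-Forwarded-Groups;
# the "admins" group name is a placeholder.
from litestar import Litestar, get
from litestar.connection import ASGIConnection
from litestar.exceptions import PermissionDeniedException
from litestar.handlers import BaseRouteHandler


def require_admin(connection: ASGIConnection, _: BaseRouteHandler) -> None:
    """Reject the request unless the proxy reports membership in 'admins'."""
    groups = connection.headers.get("x-forwarded-groups", "")
    if "admins" not in {g.strip() for g in groups.split(",")}:
        raise PermissionDeniedException("admin group membership required")


@get("/admin", guards=[require_admin])
async def admin_panel() -> dict[str, str]:
    return {"status": "ok"}


app = Litestar(route_handlers=[admin_panel])
```

The nice part of the guard approach is that authz stays out of handler bodies, and since the proxy terminates auth, the app only ever sees trusted headers (assuming it isn’t reachable except through the proxy).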
It has been a minute since I worked daily in this space, so it has been good to dust off the cobwebs.
Definitely looks like a nice improvement. It functions much like cloud provider CLI SSO, but with a generic tool.
I think for an enterprise use case, supporting the groups claim (or other configurable scopes) is table stakes. Although in those situations, I’ve also had to use other tools like Teleport that come with other enterprise niceties like full session audit capture and playback.
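For illustration, “supporting the groups claim” mostly reduces to a membership check once the token is verified. A rough PyJWT sketch (JWKS retrieval and issuer validation elided; every name below is a placeholder, not any particular tool’s API):

```python
# Rough sketch: gate access on an OIDC groups claim. Key retrieval (JWKS),
# issuer validation, and the group/audience names are placeholders.
import jwt  # PyJWT

REQUIRED_GROUP = "ssh-prod"  # hypothetical group that grants access


def is_authorized(id_token: str, public_key: str) -> bool:
    claims = jwt.decode(
        id_token,
        public_key,
        algorithms=["RS256"],
        audience="ssh-gateway",  # assumed audience value
    )
    # The IdP has to be configured to emit 'groups' in the token at all --
    # that's the configurable-scopes part (a claim/scope mapping in
    # Keycloak, Okta, Entra, etc.).
    return REQUIRED_GROUP in claims.get("groups", [])
```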
And while everyone should do their own threat and risk modeling, you’ve now made your SSH connection dependent on an external service that likely needs to reach out over the internet.
As a reminder, Tesla is “direct to consumer” and the “dealership” is corporate owned!
So when the party of conspiracy theorists claims conspiracy against them, you can rest assured it’s a conspiracy by them.
But that’s what they’d say if the tables were turned, so fuck 'em and their shit. I hope the insurance adjuster finds a way out and leaves them holding the bag.
20 yrs ago (fuck, I guess it was), I got to 40 wpm on Dvorak and 60 wpm on Colemak. But it was such a pain in the ass for everything else that I gave up.
Still regret it.
Hey I have extra keyboards and time now…
FireDragon here, and yeah, sometimes Google won’t even let me log in either.
Why don’t they have free will?
How about: there’s no difference between actual free will and an infinite universe of infinite variables affecting your programming, resulting in a belief that you have free will. Heck, a couple million variables is more than plenty to confuddle these primate brains.
Seems satisfactory to me.
To a significant extent, they do: at least at the largest scale, they contract for the construction of generation and transmission (very often renewable).
But, it’s (mostly) all on the grid.
With demand like that, it’s not like there isn’t significant negotiation with the local power company, especially because they’re frequently built a significant distance from existing large power infrastructure.
Heck, all the big 3 cloud providers signed deals for nuclear generation in the last few months. https://spectrum.ieee.org/nuclear-powered-data-center
Here’s just one more article about these sorts of investments: https://www.canarymedia.com/articles/clean-energy/google-has-a-20b-plan-to-build-data-centers-and-clean-power-together
Yeah, I can see how it would be confusing. Your internal sound “card” is managing several outputs (profiles), and settings are per-device rather than per-profile.
I’m sure there’s a way to detect the HDMI plug/unplug and script an action to un/mute the audio. If you’re connected to the other external speaker, that logic could get confused, so you’d have to account for that.
I’m no expert in that department, but udev is where I’d start.
Here’s a link to someone trying to trigger action on HDMI events that could get you started down the right path: https://forums.raspberrypi.com/viewtopic.php?t=343614
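For the general shape of it, here’s a rough pyudev sketch; the connector path and sink name are guesses, so check /sys/class/drm/ and `pactl list short sinks` on your machine:

```python
# Hedged sketch: watch DRM 'change' uevents (fired on HDMI plug/unplug)
# and mute/unmute a PipeWire/PulseAudio sink to match. The connector path
# and sink name below are assumptions for illustration.
from pathlib import Path
import subprocess

import pyudev

HDMI_STATUS = Path("/sys/class/drm/card0-HDMI-A-1/status")  # assumed connector
SINK = "alsa_output.pci-0000_00_1f.3.hdmi-stereo"  # assumed sink name

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by("drm")  # HDMI hotplug shows up as 'change' events here

for device in iter(monitor.poll, None):
    if device.action != "change":
        continue
    connected = HDMI_STATUS.read_text().strip() == "connected"
    # Mute when the display disappears, unmute when it comes back.
    subprocess.run(["pactl", "set-sink-mute", SINK, "0" if connected else "1"])
```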
I understand what you’re getting at, but I feel like that’s doing a disservice to Plasma. It absolutely could (it’s software!) display whatever they want, but it would require a paradigm shift.
I’ve struggled with this sort of thing for many years. Multiple audio devices (switching between speakers/headphones/headset), complex input/output schemas (e.g. audio passthrough for a console mixed with the podcast on my PC, or from the endurance race on a Chromecast (obligatory “🖕 peacock”) mixed with the game I’m playing on the PC), echo cancellation between various sources and the selected output, etc.
Audio management is complex, but I think OP is getting at one of the weaker points in “year of the Linux desktop” adoption.
I’m managing only because I’ve spent so much time figuring things out over nearly 20 years of Linux use. My setup is currently a combination of Plasma (I think the app is just called “Volume”), qpwgraph, and individual app settings.
I can’t even get mDNS to work with systemd-resolved and a local VM.
Best of luck though, definitely something I’ll be watching!