

Also in the case of high-rate data transfer, these plugs would probably add a lot of impedance (though that’s not a concern in this use case)


NOT HOW IT WORKS


Well well well. If it isn’t the thing people have been saying was going to happen for the last 7-10 years. Surely there was no way to prevent this.


Sword of ghostly might: One owned by a powerful warrior who came back as a vengeful spirit. Neglects to mention that the sword is also a ghost, and therefore can only deal damage to spirits.


On macOS, that program is sometimes just Finder trying to calculate folder size


Only being able to ssh in after a reboot doesn’t solve the problem, of course


I would love to watch this. He’d save the ship from capture by the Romulans and reestablish peace, entirely by accident, through a comedic series of mishaps.
Get tired and go home


Night in the Woods
Hollow Knight
CrossCode
Minecraft
Cult of the Lamb
Chicory (Lena Raine is goated with the sauce)
Kirby’s Dream Course
Gen 3 and 4 Pokemon games
Anything Toby Fox touches
Later Alligator
Stardew Valley
Guitar Hero: Modern Hits for the DS baybee
Pretty much any Nintendo game since the NES


My only experience is with gpu-side OpenGL, so here goes:
Your gpu is a separate device designed to run simple tasks with a staggering amount of parallelization. What does that mean? Basically every vertex and pixel on your screen needs to be processed before it can be displayed, and the gpu has a bunch of small cores that do all of that for every single frame your monitor outputs. A programmer defines all this using shaders. In OpenGL, the shader language is called GLSL.
In the opengl graphics pipeline, the cpu-side code defines which effects apply to which geometry in what order. For example, you may want to render every opaque object first, and then draw the translucent objects on top with semi-transparency (sorting translucent geometry into a later pass like this is a very common technique, since blending depends on what’s already been drawn). Maybe you’d want a different shadow map for each light-emitting object. Maybe you’d want a setting to define how much bloom to draw to the screen. Maybe you want to provide textures for the gpu to access. The possibilities are endless.
On the gpu-side, we write code in shaders. The shaders, written in GLSL, get compiled by your device-specific drivers into the machine code your hardware uses. In OpenGL there are several types of shader, but there are two main ones: Vertex and Fragment shaders.
Vertex shaders run first. They run on every vertex in the scene and do the math that puts each vertex in the correct location. You can also assign varying values specific to each vertex that get passed down the pipeline to the next shaders.
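Here’s roughly what a minimal GLSL vertex shader looks like (the attribute and uniform names are just placeholders I made up, your cpu-side code decides what actually gets uploaded):

    #version 330 core

    layout(location = 0) in vec3 aPosition; // per-vertex position from the cpu side
    layout(location = 1) in vec3 aColor;    // per-vertex color, also from the cpu side

    uniform mat4 uModelViewProjection; // combined transform uploaded by the cpu side

    out vec3 vColor; // a "varying" value passed down the pipeline

    void main() {
        // Put this vertex in the correct on-screen location
        gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
        // Hand the color off to be interpolated across the triangle
        vColor = aColor;
    }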
Between the vertex and fragment shaders, the gpu automatically saves performance by removing any geometry that ends up off-screen and any triangle that’s definitely not visible to the camera (this is called clipping and culling), and then fills in each triangle with pixels called fragments (in a process called rasterization). Each fragment also has access to the varying values of its three vertices, interpolated across the face of the triangle (ie the closest vertex has the most influence).
After this, the fragment shaders are run on every pixel/“fragment” on screen - this is where you’d render effects like lighting and shadows and apply textures. The fragment shaders determine the color of the pixel as it appears on your screen.
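And a matching minimal fragment shader, picking up the interpolated varying from the vertex shader sketch above (again, the names are made up):

    #version 330 core

    in vec3 vColor; // the varying from the vertex shader, interpolated across the triangle

    out vec4 fragColor; // the final color of this fragment/pixel

    void main() {
        // This is where lighting, shadows, and texture lookups would go;
        // here we just output the interpolated vertex color directly.
        fragColor = vec4(vColor, 1.0);
    }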
There are other specialized shaders you can add too, like geometry, tessellation, and compute shaders! But your gpu needs to be new enough to support them.
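For a taste of one of those: here’s a minimal pass-through geometry shader (core since OpenGL 3.2). It runs on whole primitives between the vertex and fragment stages, and real ones emit extra geometry for effects like outlines, fur, or billboards:

    #version 330 core

    layout(triangles) in;                         // take whole triangles as input
    layout(triangle_strip, max_vertices = 3) out; // emit them back out unchanged

    void main() {
        // A do-nothing pass-through: just re-emit each input vertex.
        for (int i = 0; i < 3; i++) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }

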
Ender portal countertop


You feeling alright, Gurney?


Conflooble the energy-motron!


5 out of 10 😭


iunno ¯\_(ツ)_/¯


Redundancy is nice in the event of bitflip errors


I really loved Prodigy. It felt like the successor to Voyager


If you consider analog audio as data, then yeah all the time