The Communication Cube


Another MiSTer FPGA clone ($300):


[attached image]
 
If you're a fan of cables and history, maybe you'd like to watch a video with some of the creators of the 10BaseT standard:



The same channel has a video on setting up an ISP in the '80s.
 
@TeDaDeS Almost two-thirds of the body's copper is located in the skeleton and muscle [1,3].
We're on a list now, right? Just by watching the video?

I suspect the bucket goes up because the water, which is being pushed outwards, holds on to the inside of the bucket while going up (path of least resistance), dragging the bucket with it.

Looking at the paper: So this is how psychology pictures are made! (but why not a paper with the magnesium/aluminum?)
 
This interview clearly explains why tech came out in favor of Trump.



Makes sense to me.

Mm... well. I hardly stayed awake, because I'm not that interested in the USA, but it basically says that some of the startup kids who grew up to be rich (nobody asks the many others who went broke), without ever caring about politics, discovered once they were rich that they are right wing and want political power, because they don't like being controlled by whatever the majority of the population thinks is fair. So they use their economic power to influence the population to vote for leaders who will basically dismantle the government and hand more power to powerful individuals.
News at eleven? Or the eleventh century? Or eleven lakh years ago? I'd bet one day they'll find the same story painted in some Paleolithic cave: plain people who used to say they were left wing become right wing once they get rich or powerful.

What I find kind of new is that they're so ignorant of politics that they don't understand how government works: it's not just that they dislike the current way government works, they haven't even thought about how a government should work; they've bitten off more anarcho-capitalism than they can chew. They'll probably kill one another and disappear amid all sorts of collateral damage, but if they manage to screw everything up just the way they dream of, they'll find out they like it even less than what they had and blame everyone else.
 
The salient point to me was that the Biden administration wanted the state to monopolize AI through regulation. They wanted to prohibit AI startups.

Also, Andreessen started relatively poor and then got rich. How can you criticise that?
 
The salient point to me was that the Biden administration wanted the state to monopolize AI through regulation. They wanted to prohibit AI startups.
Well, since corporations were using AI to evade all kinds of laws and regulations (privacy, copyright, slander, liability, power demand and sustainability...), it isn't surprising.
AI is less powerful than advertised, but it's still powerful and disturbing, and any power will try to keep control and not let others become more powerful. Government power too.
Government regulates nukes, weapons, finance, transport, customs, and lots of other things. Why shouldn't it regulate AI? Just try to set up a startup to smuggle things into the country,
or refine uranium, or sell securities, or deploy a railway, and you'll see the government prohibiting a lot of things.

The thing here is that IT in general grew a lot in a lightly regulated world, and that made sense while it affected mostly immaterial things, but then came online commerce for physical objects,
exorbitant electricity use, accumulation of capital and therefore gentrification, and economic power, etc.

If Andreessen had got rich in aeronautics, construction, or law, he would be more used to a playing field fenced in by regulations. But he wasn't. So at a certain point
he went "how come I can't do what I please with my money and my business! Help! Freedom for me!".

I'm more worried about prohibiting free software collaboration across countries because of sanctions than about regulating AI.

Also, Andreessen started relatively poor and then got rich. How can you criticise that?
I don't criticise it. He was born into a world with capitalistic rules, and whether he liked them or not at the beginning (I think he never thought too much about that), he played by the rules and sort of won, so
it's unsurprising that he ended up loving capitalism. Others tried and failed, so maybe they don't like capitalism now. But as long as we focus on one of the ones who "succeeded", we may observe
a change of politics after a change of social class.

I'm not saying that's wrong in itself, just that it's unsurprising. What would be wrong is to generalise his views and think they are better than anyone else's.
 
As we live right on the "border" with France (there is no border anymore thanks to Schengen), and we in the Saarland have a close connection in our hearts to France and Belgium, I decided this morning to get myself a good batch of digital Asterix comics for my iPad Mini, the first 10.
As the Mini is my pocket e-reading device, it fits perfectly.

Haven't read anything yet as I'm at work, but it should work quite fine.

The last comic series I read on this thing was "The Stand", Stephen King's apocalyptic epic, and that also worked quite well.
 
Well, since corporations were using AI to evade all kinds of laws and regulations
Actually they were trying to make money
exorbitant electricity use, accumulation of capital and therefore gentrification, and economic power, etc
Don't forget political power. If these things are bad the government should take action.
I'm more worried about prohibiting free software collaboration across countries because of sanctions than about regulating AI.
And putting an end to academic research.

Basically, Andreessen states that the government fears AI because it will weaken it, as social media and cryptocurrencies have, so it wanted to bring AI in-house. That would mean the end of innovation.
 
so it wanted to bring AI in-house. That would mean the end of innovation.
You are aware that you're not talking about some world government, right? Even if they had stifled innovation in the US, that might mean whatever, but sure as hell not the end of innovation.
 
It's not like the US AI companies are innovating in any way. They kept asking for more money and resources than any project has ever had and delivered little to no improvement, and then a small Chinese company with a fraction of their resources came up with something much more efficient at a fraction of the cost.

US tech companies have no incentive to innovate. They've grown to be huge monopolistic beasts, too big to fail, too big to jail, too big to care. They can buy or outspend any potential rival or just lobby government to make or keep competition illegal thanks to "IP" laws.

They only need some shiny projects to dangle in the media (VR, NFTs, AI) to keep raising their share value and possibly get public funding, but they never deliver what they promise (self-driving cars are a joke, VR never took off...). They fire the people who do actual work and spend the money buying back shares to make their shareholders richer.
 
Russia found Ukraine's weak spot: they now use cute donkeys for transport. So if Ukraine wants to stop the supply lines, it has to make the choice to kill or injure a donkey.

It's not like the US AI companies are innovating in any way. They kept asking for more money and resources than any project has ever had and delivered little to no improvement, and then a small Chinese company with a fraction of their resources came up with something much more efficient at a fraction of the cost.
DeepSeek did use existing AI models to train its own model, and that trick somewhat hides the actual cost of creating it. The end result is indeed a cheap (hardware-wise) public LLM, but the claim that it was created much more cheaply than ChatGPT is less true.
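For context, "using existing AI models to train their own model" usually means some form of knowledge distillation: a smaller "student" model is trained to imitate the output distribution of an already-trained "teacher", so whoever trained the teacher has effectively paid for most of the learning. The snippet below is a minimal, hypothetical sketch of that idea in PyTorch (toy model sizes and made-up data, not DeepSeek's or OpenAI's actual pipeline):

```python
# Generic sketch of knowledge distillation (hypothetical toy example,
# NOT any company's real training pipeline).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, TEMP = 1000, 64, 2.0   # toy vocabulary, hidden size, softening temperature

# "Teacher": a model someone else already paid to train (kept frozen here).
teacher = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Flatten(), nn.Linear(DIM, VOCAB))
# "Student": a smaller model that learns to imitate the teacher's answers.
student = nn.Sequential(nn.Embedding(VOCAB, DIM // 4), nn.Flatten(), nn.Linear(DIM // 4, VOCAB))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

prompts = torch.randint(0, VOCAB, (32, 1))      # stand-in for real prompts
with torch.no_grad():
    teacher_logits = teacher(prompts)           # the expensive knowledge, obtained cheaply

# Classic distillation loss: KL divergence between temperature-softened distributions.
student_logits = student(prompts)
loss = F.kl_div(F.log_softmax(student_logits / TEMP, dim=-1),
                F.softmax(teacher_logits / TEMP, dim=-1),
                reduction="batchmean") * TEMP * TEMP

opt.zero_grad()
loss.backward()
opt.step()   # one training step; repeat over many prompts in practice
```

The point of the sketch is only that the student's training bill excludes whatever it cost to produce the teacher's outputs in the first place.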
 
Innovation is bad. It's not terrible. It's not so bad that you must fear it. But it is bad, because it requires relearning and it threatens standardisation. Its intrinsic disadvantage must be compensated by something good that the particular innovation brings. And that is a case-by-case judgement, not at all a blanket assumption.

Corporations boast of innovation because innovation adds to the knowledge imbalance that helps the provider fleece the customer. The innovator knows more about what he's selling than the customer does, and he gets to explain whatever he wants about it. And they have built artificial IP laws and financial doctrine to enshrine innovation. But that is about wielding power to make money, not about anything useful. Innovation is bad, but it is worse for the customer (or the planet) than for the provider/innovator, so it gets added to the corporate arsenal.

So if a problem requires innovation, and the problem is worth solving, innovate. But if doing nothing is an option, or the problem is not that important, or there are good enough solutions already, innovation is waste. It should be prohibited to innovate without a good survey of the state of the art beforehand, to make sure the problem isn't already solved, instead of reinventing the wheel because you have more self-confidence than knowledge. And in a fair world nobody should be able to impose their innovations on others. And to allow checking the state of the art, all innovation should be public and open, never proprietary, so that it adds to common knowledge, not to private arsenals.

Did you ever try to contribute to a big free software project and find it too difficult, because the maintainers care more about being able to keep maintaining the whole than about your particular useful fix or improvement? Contribution is still possible, but it is harder than a contributor initially assumes, before taking into account the situation of the other members of the community. Well, that's how everything should be. The burden is on the innovator to explain clearly what he is doing and convince everyone else, who were already happy with what they had.
 
Russia found Ukraine's weak spot: they now use cute donkeys for transport. So if Ukraine wants to stop the supply lines, it has to make the choice to kill or injure a donkey.
They were already making the choice of killing or injuring humans, some (many?) of whom were as forced to be there as the donkeys are. I'm not sure the particular species of the victim matters that much.
I mean: I understand you, in that deciding whether to destroy an enemy drone is a much easier question than deciding whether to destroy an enemy human or an enemy donkey.
But they were already destroying manned vehicles with their crews, so they're sadly beyond that point.


DeepSeek did use existing AI models to train its own model, and that trick somewhat hides the actual cost of creating it. The end result is indeed a cheap (hardware-wise) public LLM, but the claim that it was created much more cheaply than ChatGPT is less true.
And the existing AI models used existing human works (code, books, web pages...) to train on, often without proper permission. And those human authors used the whole of humanity as their teachers. So the cost of the US models should also be higher than stated.

It's funny how OpenAI, DeepSeek and all choose to draw the lines where they please.
 