Interesting Quotes
-
From: Chi paga davvero le tasse in Italia.
By: IlPost
The state gets its money first and foremost from taxes (taxes and social security contributions, about 900 billion euros). Three big items: social security contributions to INPS and INAIL, which pay today's pensions; IRPEF (the personal income tax); and IVA (VAT).
To judge whether 900 billion is a lot or a little, you look at the ratio of tax revenue to GDP. In Italy it is 42-43% of GDP, consistently higher than the average of the OECD countries (32%).
In 2024, out of 59 million inhabitants, only 33 million paid at least €1 of IRPEF. 43% declare no taxable income at all. 27% of those who do pay it account for 80% of all IRPEF.
94% of total IRPEF is paid by employees and pensioners.
The rate on rental income (the "cedolare secca") is a flat 21%, regardless of the size of the property.
IMU is the main wealth tax, owed simply for owning a property.
Tax gap: the difference between what would be collected if everyone paid their taxes and the revenue actually collected. The estimate is €1 evaded for every €4 owed. Very little of the evasion happens on income from salaried work and pensions; there, evasion is practically nil.
In recent years, it is estimated that only a third of the IRPEF owed on self-employment and business income has actually been paid. According to the estimates, when a self-employed worker is hired and becomes an employee, their salary doubles: not because they earned little before, but because before they could, and had the opportunity to, evade, whereas now they no longer can.
Evasion is higher in southern Italy than in the north, but that is because firms in the south are smaller on average, and the statistics show smaller firms are more inclined to evade.
It is estimated that if everyone paid their taxes, the state could collect the same amount it collects now while lowering tax rates by 20%.
There are 600 active tax breaks, with an estimated cost of about 100 billion in lost revenue compared with the theoretical take.
Over the last 10 years the tax gap has shrunk every year: from 100 billion in 2014 to 96 billion in 2021; in 2022 it stood at 17%.
-
From: Tobi Lutke's (Shopify's CEO) post on x.com
Economic growth is literally everything. People fondly remember the times of high GDP growth and treat the ones with low growth as the dark ages. Growth means positive sum thinking becomes the optimal strategy and this is the bedrock of civic society. We must build more companies and export products to make Canada better. No other mechanism exists. The schools are obscuring this fact and media often distorts it. Kudos to the Globe for getting it right here. Nature is healing?
-
“I'm not totally sure. How would you know? One thing that I think is interesting is if you ask me how big is the Internet.”
-
From: Sora the App, Sonnet 4.5 and the Question of Models as Processors (Stratechery Update 10-1-2025)
In this new competition, I prefer the Meta experience, by a significant margin, and the reason why goes back to one of the oldest axioms in technology: the 90/9/1 rule.
- 90% of users consume
- 9% of users edit/distribute
- 1% of users create
People create video, however, for others, and I'm just not sold that the AI video the Sora app enables, easy though it may be to make, is that interesting to anyone other than the creator.
-
From: The new Veo 3 paper from Google
Since these changes are applied frame-by-frame in a generated video, this parallels chain-of-thought in LLMs and could therefore be called chain-of-frames, or CoF for short. In the language domain, chain-of-thought enabled models to tackle reasoning problems. Similarly, chain-of-frames (a.k.a. video generation) might enable video models to solve challenging visual problems that require step-by-step reasoning across time and space.
-
From: Why your website should be under 14kB in size
Latency is the time it takes a packet of data to travel from its source to its destination.
What is TCP slow start? Your server doesn't know how much data the connection can handle, so it starts by sending a small, safe amount of data, usually 10 TCP packets. If those packets successfully reach your site's visitor, their computer sends back an acknowledgement (ACK) saying the packets have been received. Your server then sends more data, but this time it doubles the number of packets. This process is repeated until packets are lost and your server doesn't receive an ACK (at which point the server continues to send packets, but at a slower rate).
Bandwidth is how much data can be transmitted over a network per unit of time. Usually it's measured in bits per second (b/s). Plumbing is a common analogy — think of bandwidth as how much water can come out of a pipe per second.
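A minimal Python sketch (not from the article) of why a page of roughly 14 kB fits in the very first flight of packets. It assumes the common defaults of an initial congestion window of 10 segments (RFC 6928) and an MSS of about 1460 bytes; real networks vary.

INITIAL_CWND_SEGMENTS = 10   # common initial congestion window since RFC 6928
MSS_BYTES = 1460             # typical Ethernet maximum segment size; varies in practice

def round_trips_for(payload_bytes):
    """Round trips needed if the window doubles after every fully ACKed flight."""
    cwnd_bytes = INITIAL_CWND_SEGMENTS * MSS_BYTES
    sent, trips = 0, 0
    while sent < payload_bytes:
        sent += cwnd_bytes
        trips += 1
        cwnd_bytes *= 2  # slow start: exponential growth until loss (or ssthresh)
    return trips

print(INITIAL_CWND_SEGMENTS * MSS_BYTES)  # 14600 bytes: about 14 kB in the first flight
print(round_trips_for(14_000))            # 1 round trip
print(round_trips_for(50_000))            # 3 round trips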
-
From: Charlie newsletter:
The internet has made space infinite, but not time.
-
From: Coding with LLMs in the summer of 2025 (an update)
Tomorrow all this may change, but right now after daily experience writing code with LLMs I strongly believe the maximum quality of work is reached using the human+LLM equation.
I believe that humans and LLMs together are more productive than just humans, but this requires a big "if": such humans must have extensive communication capabilities and experience with LLMs; the ability to communicate efficiently is a key factor in using LLMs.
Information to include in the prompt… Clear goals of what should be done, the invariants we require, and even the style the code should have. For instance, LLMs tend to write Python code that is full of unnecessary dependencies, but prompting may help reduce this problem. C code tends to be, in my experience, much better.
-
From: Why language models hallucinate
A language model is initially defined as a probability distribution over text and later prompts are incorporated (Section 3.2); both settings share the same intuition.
Then from ChatGPT: At its core, a language model is nothing more than a probability machine, P_θ(X_1, X_2, …, X_n). This says: given parameters θ (the model weights), the LM assigns a probability to every possible sequence of tokens X_1, X_2, …, X_n. That's the distribution over text: it's a big map of "how likely is each sequence of tokens?"
From the paper: “Given the prompt 'The capital of France is', what's the probability that the next word is 'Paris', 'London', or 'Banana'?” It's tempting to blame hallucinations on the prefix: “Oh, the model fails because you asked it something obscure.” But the authors argue: that's not the real story. Humans also speak one word at a time — that doesn't mean human language is “just autocomplete.” Speaking word by word is simply the mechanics of generation, not the essence of what's going on.
hallucinations are baked into the statistical learning goal, not into the surface mechanics of word-by-word generation.
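A toy Python sketch of my own (not from the paper) of what "a probability distribution over text" means: made-up conditional probabilities stand in for P_θ(next token | prefix), the chain rule sums their logs to score a sequence, and sampling from the distribution is the word-by-word mechanics of generation. The prefix and the numbers are purely illustrative.

import math
import random

# Made-up conditional distributions standing in for P_θ(next token | prefix).
NEXT_TOKEN = {
    ("The", "capital", "of", "France", "is"): {"Paris": 0.90, "London": 0.07, "Banana": 0.03},
}

def next_token_distribution(prefix):
    """The model's full distribution over the next token, given a prefix."""
    return NEXT_TOKEN[tuple(prefix)]

def continuation_log_prob(prefix, continuation):
    """Chain rule: sum of log P(x_i | x_<i) for each token appended to the prefix."""
    log_p, context = 0.0, list(prefix)
    for token in continuation:
        log_p += math.log(next_token_distribution(context)[token])
        context.append(token)
    return log_p

def sample_next(prefix):
    """One step of word-by-word generation: draw a token from the distribution."""
    dist = next_token_distribution(prefix)
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prefix = ["The", "capital", "of", "France", "is"]
print(next_token_distribution(prefix))           # the whole map of likelihoods
print(sample_next(prefix))                       # usually 'Paris', occasionally not
print(continuation_log_prob(prefix, ["Paris"]))  # log(0.90)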
-
From: Writing system software: code comments.
…in general, code is meant to be read and not just executed, since it is written by humans for other humans.
During my research I identified nine types of comments:
- Function comments: a form of in-line API documentation. Keeping the API documentation close to the code means the docs can easily be changed at the same time as the code, the author of the change is also the author of the API documentation change, and there is no context switching from code to docs.
- Design comments: how and why a given piece of code uses certain algorithms, techniques, tricks, and implementation choices. Used to explain, for example, why a very simple solution was considered enough for the case at hand.
- Why comments: explain the reason why the code is doing something, even if what the code is doing is crystal clear.
- Teacher comments: explain what is going to happen inside the function itself. They teach something in case the reader is not aware of such concepts, or at least provide a starting point for further investigation.
- Checklist comments: tell you to remember to do things in some other place of the code. The general concept is: /* Warning: if you add a type ID here, make sure to modify the function getTypeNameByID() as well. */ Sometimes this kind of defensive commenting helps ensure that if a given code section is touched, the reader is reminded to also modify other parts of the code.
- Guide comments: used to babysit the reader, assisting them while processing what is written in the source code by providing clear divisions, rhythm, and an introduction to what they are about to read. If guide comments clearly divide the code into isolated sections, an addition to the code is very likely to be inserted in the appropriate section.
- Trivial comments: a trivial comment is a guide comment where the cognitive load of reading the comment is the same as or higher than just reading the associated code.
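A short sketch of my own, in Python (the post's own examples come from C code), showing a few of the comment types above in one place:

# Design comment: a couple of integer constants is enough here; the set of
# types is tiny and fixed, so a registry class would be overkill.
TYPE_STRING = 1
TYPE_LIST = 2
# Checklist comment: if you add a type constant here, make sure to modify
# getTypeNameByID() below as well.

def getTypeNameByID(type_id):
    """Function comment: return the printable name of a type ID ("unknown" if missing)."""
    if type_id == TYPE_STRING:
        return "string"
    if type_id == TYPE_LIST:
        return "list"
    # Why comment: this name ends up in log lines, so return a placeholder
    # instead of raising on unknown IDs.
    return "unknown"

# Guide comment: validation helpers start here.
def isValidTypeID(type_id):
    return type_id in (TYPE_STRING, TYPE_LIST)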
-
From: La colossale ingiustizia delle pensioni
“In the 1970s, for male public-sector employees, 19 years were enough” to retire.
-
From: The Bitter Lesson versus The Garbage Can
And that is the Bitter Lesson — encoding human understanding into an AI tends to be worse than just letting the AI figure out how to solve the problem, and adding enough computing power until it can do it better than any human.
-
From: Counterculture to Cyberculture
The liberation of the individual was simultaneously an American ideal, an evolutionary imperative, and, for Brand and millions of other adolescents, a pressing personal goal.
At the same time, both the artists he met and the authors they read presented the young Stewart Brand with a series of role models. If the army and the cold war corporate world of Brand's imagination moved according to clear lines of authority and rigid organizational structures, the art worlds of the early 1960s, like the research worlds of the 1940s, lived by networking, entrepreneurship, and collaboration.
-
From: Design Considerations for an Anthropophilic Computer
This is an outline for a computer designed for the Person In The Street (or, to abbreviate: the PITS); one that will be truly pleasant to use, that will require the user to do nothing that will threaten his or her perverse delight in being able to say: "I don't know the first thing about computers," and one which will be profitable to sell, service and provide software for.
If the computer must be opened for any reason other than repair (for which our prospective user must be assumed incompetent) even at the dealer's, then it does not meet our requirements.
Computerese is taboo
It is expected that sales of software will be an important part of the profit strategy for the computer.
It should fit under an airline seat. It would be best if it were to have a battery that could keep it running for at least two hours when fully charged.
-
From: 1984 Macintosh Manual
Position the pointer on the System Folder icon and quickly press and release the mouse button twice.
-
From: Chi decide i prezzi delle case
“Italy is near the bottom in Europe for new homes built relative to its population”
-
From: Quando la sinistra è diventata pessimista
Being optimistic is equated with being naive, with being a bit ingenuous, with not knowing enough, and none of us wants to give that impression.
Being pessimistic about the future has in some ways become a way of giving ourselves an identity, but it is not really ours; it is a conformist identity, borrowed from the others by whom we want to be accepted.
-
From: Against "Brain Damage" of One Useful Thing - ETHAN MOLLICK
…So how do you get AI's benefits without the brain drain? The key is sequencing. Always generate your own ideas before turning to AI. Write them down, no matter how rough. Just as group brainstorming works best when people think individually first, you need to capture your unique perspective before AI's suggestions can anchor you. Then use AI to push ideas further…
…there is a paradox: while AI is more creative than most individuals, it lacks the diversity that comes from multiple perspectives. Yet studies also show that people often generate better ideas when using AI than when working alone, and sometimes AI alone even outperforms humans working with AI. But, without caution, those ideas look very similar to each other when you see enough of them
The problem is that even honest attempts to use AI for help can backfire because the default mode of AI is to do the work for you, not with you.
Ultimately, it is how you use AI, rather than use of AI at all, that determines whether it helps or hurts your brain when learning
Human judges rating the ideas showed that ChatGPT-4 generated more, cheaper, and better ideas than the students. The purchase intent from these outside judges was higher for the AI-generated ideas as well.
-
From: How the Gay Rights Movement Radicalized and Lost Its Way
How did a movement that began with sexual liberation end up doing that?
By drift, activist extremism, a social media bubble and suppression of free debate. Soon enough, the right began associating what used to be the lesbian and gay movement with this gender extremism, and the L.G.B.T.Q. movement responded not by moderating tone or substance but by closing ranks, seemingly determined to prove its point.
The idea that we would tell other people what words they can use, shut down speakers, criticize journalists and threaten others into silence was once absurd. Yet these are now the signature tools of the L.G.B.T.Q. movement. They do not seek to engage or persuade opponents; they seek to demonize, bully or cancel them.