КРИМський бандерівець ([syndicated profile] crimea_ua_feed) wrote 2025-08-23 12:50 pm

Full-scale invasion by russia. Issue #1276 for 23.08.2025: “For the Ukrainian flag the rashists torture,

Posted by КРИМський бандерівець

1. The “Yellow Ribbon” movement has decorated the temporarily occupied cities of the Donetsk, Luhansk, Zaporizhzhia, and Kherson oblasts and of Crimea with Ukrainian flags. Yevpatoria, Sevastopol, Donetsk, Luhansk, Zhdanivka, Henichesk, Berdiansk, Mariupol, and other cities are celebrating Ukrainian Flag Day together with the whole country. Do not forget that no one awaits de-occupation as keenly as the people forced to live under occupation! WE – […]
Selenga ([personal profile] selenga) wrote 2025-08-23 03:43 pm

1 075 160

Total combat losses of the Russian Federation since the start of the war: approximately 1 075 160 personnel (+840 over the past day), 11 129 tanks, 31 858 artillery systems, and 23 164 armored combat vehicles. INFOGRAPHIC

Source: https://censor.net/ru/n3570114
chuka_lis ([personal profile] chuka_lis) wrote 2025-08-22 11:27 pm

After the fact

I'm touched by this concern for the ordinary person. I warmly recall the reports on drinking-water quality, […]
ratomira ([personal profile] ratomira) wrote 2025-08-23 10:16 am

Why are the katsaps seething?

The katsaps have gotten active on Dreamwidth again, throwing around links to katsap propaganda posts.

The katsaps are seething because Ukrainian drones and missiles now fly into rashka every night and successfully hit something. The main targets are (a) oil refineries, so rashka has less and less in the way of petroleum products; (b) oil pipelines, which disrupts the logistics chain for delivering oil; and (c) railway junctions (trains come to a halt).

Ukraine has set up production of the "Flamingo" missile (range of up to 3,000 km), which in theory could reach Novosibirsk, to say nothing of places like Tyumen.

And Russian air defense is full of holes; for practical purposes it barely exists, and it cannot cover such a vast territory.

Problems with gasoline and diesel fuel have already begun in some remote Russian regions.

If this welcome trend keeps up, shortages of food and other goods will follow soon, especially where there is no railway or river transport. For the katsaps, every kind of goods will get more and more expensive. Seethe, katsaps, seethe. There is plenty to seethe about.
redis ([personal profile] redis) wrote 2025-08-23 10:33 am

Mannerheim managed it!

Interestingly, among z-liberals and other "it's-not-so-clear-cut" types, stories are gaining popularity about the wise Mannerheim, who agreed to an armistice in 1940.

The reason is clear enough: unable to demand peace from their own leaders, the z-liberals shift responsibility for the war's continuation onto the victim, who refuses to capitulate and hand them peace.

"Finland managed it!" they exclaim. "So why is Ukraine being stubborn? It's time to put an end to the bloodshed! And let the useful sanctions be lifted from us as soon as possible!"

Even Lavrov has started publicly Lavrov-lying about Finland signing up to eternal neutrality at the end of the war.

So where are they wrong, apart from the very fact of their existence?

Formally, Finland really did get away with losses that weren't all that large. A Soviet base on the Hanko Peninsula, one major city, and a shifted border in exchange for eternal peace? What's so bad about that, right?

"Nothing at all!" we would say, if we knew the history of the Second World War only from Medinsky's textbooks.
[…]
tiresome_cat ([personal profile] tiresome_cat) wrote 2025-08-23 09:30 am

Hygiene procedures

It had been a while since rashka-bots last crawled into the comments here on Dreamwidth, and here they are again. Interesting: do they really expect me to hold some kind of conversation with them instead of deleting and banning them on the spot? Nobody will see their droppings anyway, since comments from "outsiders" are screened on my journal. So today the Chekist kept-women ate their Lubyanka bread with cockroaches entirely in vain :)
chuka_lis ([personal profile] chuka_lis) wrote 2025-08-22 10:32 pm

A tip from Celeste

I was thinking of a way to make it easier to show students how to use color theory. So I made my circular palette; I used to hand these out at seminars. It teaches color theory without your having to learn all the unnecessary mumbo-jumbo. Today I'm adding more information from what I learned from Bob Burridge about deciding on colors.
1. 80% of the surface = 3 analogous colors = 3 colors next to each other on this palette.
2. 10%, near the focal area, of the complement color = the color opposite the ‘middle’ color of the three analogous colors you chose.
3. 5% each of the discords = the 2nd color on each side of the complement … use at the focal point sparingly.
Then use #2 and #3 to mute the main 3 analogous colors (#1) as you paint outwards, away from the focal area, as sketched in the code below.
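Here is a small Python sketch of those rules on a 12-step color wheel. It is my own illustration of the 80/10/5/5 scheme; the hue names and the 12-hue wheel are assumptions, and Celeste's hand-made circular palette may differ.

```python
# A minimal sketch (not from the original post) of the 80/10/5/5 scheme on an
# assumed 12-step color wheel.
WHEEL = ["yellow", "yellow-orange", "orange", "red-orange", "red", "red-violet",
         "violet", "blue-violet", "blue", "blue-green", "green", "yellow-green"]

def palette(middle_index: int) -> dict:
    """Return the analogous / complement / discord picks for one focal choice."""
    n = len(WHEEL)
    analogous = [WHEEL[(middle_index + k) % n] for k in (-1, 0, 1)]        # rule 1: ~80% of the surface
    complement = WHEEL[(middle_index + n // 2) % n]                        # rule 2: ~10%, near the focal area
    discords = [WHEEL[(middle_index + n // 2 + k) % n] for k in (-2, 2)]   # rule 3: ~5% each, used sparingly
    return {"analogous (80%)": analogous,
            "complement (10%)": complement,
            "discords (5% each)": discords}

if __name__ == "__main__":
    # Example: a blue-centered analogous scheme gives orange as the complement
    # and yellow/red as the discords.
    for name, colors in palette(WHEEL.index("blue")).items():
        print(f"{name}: {colors}")
```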
[image: circular palette]
chuka_lis ([personal profile] chuka_lis) wrote 2025-08-22 08:58 pm

What is too much is not healthy

A wise proverb, and it applies to everything. Everything is good in moderation. Of course, with the caveat that everyone's sense of moderation differs slightly, roughly to the same degree that all other traits and characteristics differ.
If moderation is not observed, you may not only fail to get what you need or want but, on the contrary, come to harm. Not only come to harm (yourself), but also cause it (to others).
As Paracelsus put it: "everything is both a poison and not a poison; it is the dose alone that determines the effect, or the absence of toxicity." […]
Schneier on Security ([syndicated profile] bruce_schneier_feed) wrote 2025-08-22 07:00 pm

I’m Spending the Year at the Munk School

Posted by Bruce Schneier

This academic year, I am taking a sabbatical from the Kennedy School and Harvard University. (It’s not a real sabbatical—I’m just an adjunct—but it’s the same idea.) I will be spending the Fall 2025 and Spring 2026 semesters at the Munk School at the University of Toronto.

I will be organizing a reading group on AI security in the fall. I will be teaching my cybersecurity policy class in the Spring. I will be working with Citizen Lab, the Law School, and the Schwartz Reisman Institute. And I will be enjoying all the multicultural offerings of Toronto.

It’s all pretty exciting.

Serge Vakulenko ([personal profile] vak) wrote 2025-08-22 11:47 am

Time for Europe to Get Off Its Ass

(reposting this in full, it's worth it)

It's August 2025. Biden is history. Trump is back. And after three years of war, one thing is beyond obvious: Europe still has no plan.

Billions have been spent. Headlines have been written. Security "guarantees" have been announced and re-announced. But on the ground in Ukraine, what we have is a war of attrition — and a continent still improvising its way toward defeat.

The U.S. Is Out — By Choice

Let's stop pretending. Whatever comfort Europeans took from the idea of "unshakable American support" is gone. Trump has made that crystal clear. He drags his feet on every shipment. He treats Ukraine like a bargaining chip. And when he isn't stalling, he's running interference for Moscow — signaling weakness and chaos that Putin reads as opportunity.

Europe has to understand this: the United States is no longer a partner to be counted on. Any plan that assumes Washington will lead is worse than naïve — it's dangerous. From here forward, U.S. help, when it comes, is a bonus, not a backbone. The future of Ukraine is Europe's responsibility now, or there is no future at all.

Diplomacy Is Not a Strategy

The second illusion is that clever diplomacy will somehow end this war. That if we talk long enough, Putin will blink, or that "security guarantees" without actual firepower will change anything.

That fantasy needs to die.

Putin isn't negotiating for peace. He's buying time — to rebuild his army, to fortify occupied territory, and to wait out Western fatigue. Every delay, every soft promise, every meaningless communiqué hands him that time. And every day without a plan costs Ukrainian lives.

Europe Needs a Real Plan — Now

The blueprint already exists. In The Shield and Denial Strategy and The Ukraine Decision, I've laid out the industrial framework Europe needs: mass air defense production, sustained artillery supply chains, co-production facilities in Ukraine, and enforcement mechanisms that actually work. The details are there.

The cost is modest: €200 per European per year — less than half a percent of GDP.
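For readers who want to sanity-check that figure, here is a back-of-envelope calculation. The population and GDP values are my own rough, assumed round numbers, not figures from the essay; swap in whichever definition of "Europe" you prefer.

```python
# Back-of-envelope check of the "€200 per European per year, roughly half a
# percent of GDP" claim, using assumed round numbers.
population = 450_000_000          # assumed: roughly the EU's population
gdp_eur = 18_000_000_000_000      # assumed: roughly the EU's annual GDP in euros

annual_cost = 200 * population            # €200 per person per year
share_of_gdp = annual_cost / gdp_eur

print(f"annual cost: €{annual_cost / 1e9:.0f} billion")   # ≈ €90 billion
print(f"share of GDP: {share_of_gdp:.2%}")                # ≈ 0.50% under these assumptions
```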

The deliverables are clear:

Air defense at scale, so Russia's missile and drone terror campaigns fail.

Artillery and drone parity, so Russian offensives collapse by default.

Co-production in Ukraine, to shorten logistics lines and political cycles.

The only thing missing is the political will to execute.

Physics Doesn't Negotiate

Moscow doesn't care about rhetoric. It doesn't care about communiqués or hashtags. What Moscow fears is industrial reality:

Interceptor stockpiles measured in months, not days.

Two million shells delivered on schedule, month after month.

Drones at scale, integrated with precision targeting.

Energy resilience that keeps Ukraine's grid above 95% uptime, even under winter barrages.

When those numbers start moving in the right direction, the Kremlin will notice. Not because Putin suddenly grows reasonable, but because physics doesn't negotiate. When every offensive fails, when the cost of holding territory rises every quarter, when Western fatigue is off the table — that's when Russia's strategy collapses.

Stop Throwing Money — Start Building Discipline

Europe's problem isn't resources. It's discipline. The continent has thrown billions at Ukraine — but in scattershot bursts, without coherent timelines, without enforceable milestones, without accountability.

Take Germany's delayed Leopard tank deliveries in early 2024, or France's stop-start CAESAR howitzer shipments. Each delay sends the same message to Moscow: Europe talks tough but delivers weak.

Drift isn't neutral. Drift is surrender by installments.

Decision Time

This is the moment for Europe to decide whether it wants to win this war or pretend to try. The steps are painfully obvious:

Pass three-year funding laws that auto-disburse, removing politics from logistics.

Establish a European Defense Production Board with teeth to enforce contracts and delivery schedules.

Build and maintain a public delivery dashboard that voters — and Moscow — can see, tracking air-defense systems, shells, drones, and production capacity in real time.

No More Illusions

This war will not be won by speeches, hashtags, or diplomatic fantasies. It will be won by a plan: measurable, predictable, industrial.

Europe has the money. Europe has the factories. Europe even has the blueprint. The only question remaining is whether Europe can afford not to act.

The day Europe executes a real plan, the day predictable timelines start moving metal and men at industrial scale, is the day Moscow realizes the war it thought it could outlast is the war it can no longer win.

That day cannot come soon enough. The question is: will Europe choose to make it happen?
КРИМський бандерівець ([syndicated profile] crimea_ua_feed) wrote 2025-08-22 01:50 pm

Full-scale invasion by russia. Issue #1275 for 22.08.2025: “We waited for russia, and it came and took away

Posted by КРИМський бандерівець

1. Sixty-five Ukrainians whom the rashists had deported to the border with Georgia have been brought home. Among them are 8 seriously ill people who had been convicted in the russian federation, as well as people who had been considered missing. The occupiers left them in the buffer zone without documents, food, or medical care. With the support of international organizations, all of them were returned to Ukraine. In total, over the past months, from the checkpoint […]
gracheeha ([personal profile] gracheeha) wrote 2025-08-22 09:58 am

"A Republic, if you can keep it"

"FBI raids home of former Trump national security adviser John Bolton.  The search comes as part of a probe into the Trump critic’s handling of classified information." 
www.wsj.com/politics/policy/fbi-raids-home-of-former-trump-national-security-adviser-john-bolton-48f9dbc2
 
PS I couldn't stand Bolton back in the neocon era.
Schneier on Security ([syndicated profile] bruce_schneier_feed) wrote 2025-08-22 11:04 am

AI Agents Need Data Integrity

Posted by Bruce Schneier

Think of the Web as a digital territory with its own social contract. In 2014, Tim Berners-Lee called for a “Magna Carta for the Web” to restore the balance of power between individuals and institutions. This mirrors the original charter’s purpose: ensuring that those who occupy a territory have a meaningful stake in its governance.

Web 3.0—the distributed, decentralized Web of tomorrow—is finally poised to change the Internet’s dynamic by returning ownership to data creators. This will change many things about what’s often described as the “CIA triad” of digital security: confidentiality, integrity, and availability. Of those three features, data integrity will become of paramount importance.

When we have agency in digital spaces, we naturally maintain their integrity—protecting them from deterioration and shaping them with intention. But in territories controlled by distant platforms, where we’re merely temporary visitors, that connection frays. A disconnect emerges between those who benefit from data and those who bear the consequences of compromised integrity. Like homeowners who care deeply about maintaining the property they own, users in the Web 3.0 paradigm will become stewards of their personal digital spaces.

This will be critical in a world where AI agents don’t just answer our questions but act on our behalf. These agents may execute financial transactions, coordinate complex workflows, and autonomously operate critical infrastructure, making decisions that ripple through entire industries. As digital agents become more autonomous and interconnected, the question is no longer whether we will trust AI but what that trust is built upon. In the new age we’re entering, the foundation isn’t intelligence or efficiency—it’s integrity.

What Is Data Integrity?

In information systems, integrity is the guarantee that data will not be modified without authorization, and that all transformations are verifiable throughout the data’s life cycle. While availability ensures that systems are running and confidentiality prevents unauthorized access, integrity focuses on whether information is accurate, unaltered, and consistent across systems and over time.

It’s not a new idea. The undo button, which prevents accidental data loss, is an integrity feature. So is the reboot process, which returns a computer to a known good state. Checksums are an integrity feature; so are verifications of network transmission. Without integrity, security measures can backfire. Encrypting corrupted data just locks in errors. Systems that score high marks for availability but spread misinformation just become amplifiers of risk.
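To make the checksum point concrete, here is a minimal sketch (mine, not the essay's) of digest-based integrity verification: a digest is recorded when the data is written, and any later change to the payload, however small, is detected when the digest is recomputed.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 checksum of the payload as a hex string."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """True only if the payload still matches the digest recorded for it."""
    return digest(data) == expected_digest

record = b"pressure=101.3kPa temp=21.4C"
stored_digest = digest(record)            # saved when the record is written

tampered = b"pressure=101.3kPa temp=91.4C"
print(verify(record, stored_digest))      # True: unmodified
print(verify(tampered, stored_digest))    # False: any change is detected
```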

All IT systems require some form of data integrity, but the need for it is especially pronounced in two areas today. First: Internet of Things devices interact directly with the physical world, so corrupted input or output can result in real-world harm. Second: AI systems are only as good as the integrity of the data they’re trained on, and the integrity of their decision-making processes. If that foundation is shaky, the results will be too.

Integrity manifests in four key areas. The first, input integrity, concerns the quality and authenticity of data entering a system. When this fails, consequences can be severe. In 2021, Facebook’s global outage was triggered by a single mistaken command—an input error missed by automated systems. Protecting input integrity requires robust authentication of data sources, cryptographic signing of sensor data, and diversity in input channels for cross-validation.
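As one way to picture the "cryptographic signing of sensor data" recommendation, the sketch below has the sensor attach a message-authentication tag to each reading so the consumer can reject forged or corrupted input. It is an illustration under the assumption of a pre-shared key (a simpler stand-in for public-key signatures), and the field names are made up; it does not describe any particular system mentioned in the essay.

```python
import hashlib
import hmac
import json

SENSOR_KEY = b"example-shared-secret"   # assumed: provisioned to sensor and consumer out of band

def tag_reading(reading: dict) -> dict:
    """Sensor side: attach an HMAC-SHA256 tag to the serialized reading."""
    payload = json.dumps(reading, sort_keys=True).encode()
    mac = hmac.new(SENSOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "mac": mac}

def accept_reading(message: dict) -> bool:
    """Consumer side: recompute the tag and compare in constant time."""
    expected = hmac.new(SENSOR_KEY, message["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = tag_reading({"sensor": "intake-temp", "value_c": 21.4})
print(accept_reading(msg))                                   # True: authentic input
msg["payload"] = msg["payload"].replace("21.4", "91.4")
print(accept_reading(msg))                                   # False: tampered input is rejected
```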

The second issue is processing integrity, which ensures that systems transform inputs into outputs correctly. In 2003, the U.S.-Canada blackout affected 55 million people when a control-room process failed to refresh properly, resulting in damages exceeding US $6 billion. Safeguarding processing integrity means formally verifying algorithms, cryptographically protecting models, and monitoring systems for anomalous behavior.

Storage integrity covers the correctness of information as it’s stored and communicated. In 2023, the Federal Aviation Administration was forced to halt all U.S. departing flights because of a corrupted database file. Addressing this risk requires cryptographic approaches that make any modification computationally infeasible without detection, distributed storage systems to prevent single points of failure, and rigorous backup procedures.
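A common pattern behind "any modification is computationally infeasible without detection" is a hash-chained, append-only log, where each stored record commits to the hash of the previous one. The sketch below is a toy version of that idea, not a depiction of the FAA's or any other named system.

```python
import hashlib
import json

def link(prev_hash: str, record: dict) -> dict:
    """Append-only log entry: each entry commits to the previous entry's hash."""
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev": prev_hash, "body": body, "hash": entry_hash}

def verify_chain(entries: list) -> bool:
    """Recompute every link; any edited or reordered entry breaks the chain."""
    prev = "0" * 64
    for e in entries:
        expected = hashlib.sha256((prev + e["body"]).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log, prev = [], "0" * 64
for rec in ({"event": "deploy", "version": 1}, {"event": "deploy", "version": 2}):
    entry = link(prev, rec)
    log.append(entry)
    prev = entry["hash"]

print(verify_chain(log))        # True: untouched log
log[0]["body"] = log[0]["body"].replace('"version": 1', '"version": 9')
print(verify_chain(log))        # False: a retroactive edit is detected
```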

Finally, contextual integrity addresses the appropriate flow of information according to the norms of its larger context. It’s not enough for data to be accurate; it must also be used in ways that respect expectations and boundaries. For example, if a smart speaker listens in on casual family conversations and uses the data to build advertising profiles, that action would violate the expected boundaries of data collection. Preserving contextual integrity requires clear data-governance policies, principles that limit the use of data to its intended purposes, and mechanisms for enforcing information-flow constraints.
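One minimal way to sketch "limiting the use of data to its intended purposes" is to label each datum with the purposes it was collected for and gate every access on a declared purpose. The toy example below is my own illustration, far simpler than real data-governance tooling, and echoes the smart-speaker scenario above.

```python
from dataclasses import dataclass, field

@dataclass
class LabeledData:
    value: str
    allowed_purposes: set = field(default_factory=set)

class PurposeViolation(Exception):
    """Raised when data is requested for a purpose it was not collected for."""

def use(data: LabeledData, purpose: str) -> str:
    """Release the value only for purposes the data was collected for."""
    if purpose not in data.allowed_purposes:
        raise PurposeViolation(f"{purpose!r} is outside this data's intended use")
    return data.value

voice_clip = LabeledData("...", allowed_purposes={"wake-word detection"})
print(use(voice_clip, "wake-word detection"))        # allowed: matches the collection purpose
try:
    use(voice_clip, "advertising profile")           # outside the expected boundary
except PurposeViolation as err:
    print(f"blocked: {err}")
```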

As AI systems increasingly make critical decisions with reduced human oversight, all these dimensions of integrity become critical.

The Need for Integrity in Web 3.0

As the digital landscape has shifted from Web 1.0 to Web 2.0 and now evolves toward Web 3.0, we’ve seen each era bring a different emphasis in the CIA triad of confidentiality, integrity, and availability.

Returning to our home metaphor: When simply having shelter is what matters most, availability takes priority—the house must exist and be functional. Once that foundation is secure, confidentiality becomes important—you need locks on your doors to keep others out. Only after these basics are established do you begin to consider integrity, to ensure that what’s inside the house remains trustworthy, unaltered, and consistent over time.

Web 1.0 of the 1990s prioritized making information available. Organizations digitized their content, putting it out there for anyone to access. In Web 2.0, the Web of today, platforms for e-commerce, social media, and cloud computing prioritize confidentiality, as personal data has become the Internet’s currency.

Somehow, integrity was largely lost along the way. In our current Web architecture, where control is centralized and removed from individual users, the concern for integrity has diminished. The massive social media platforms have created environments where no one feels responsible for the truthfulness or quality of what circulates.

Web 3.0 is poised to change this dynamic by returning ownership to the data owners. This is not speculative; it’s already emerging. For example, ActivityPub, the protocol behind decentralized social networks like Mastodon, combines content sharing with built-in attribution. Tim Berners-Lee’s Solid protocol restructures the Web around personal data pods with granular access controls.

These technologies prioritize integrity through cryptographic verification that proves authorship, decentralized architectures that eliminate vulnerable central authorities, machine-readable semantics that make meaning explicit—structured data formats that allow computers to understand participants and actions, such as “Alice performed surgery on Bob”—and transparent governance where rules are visible to all. As AI systems become more autonomous, communicating directly with one another via standardized protocols, these integrity controls will be essential for maintaining trust.

Why Data Integrity Matters in AI

For AI systems, integrity is crucial in four domains. The first is decision quality. With AI increasingly contributing to decision-making in health care, justice, and finance, the integrity of both data and models’ actions directly impacts human welfare. Accountability is the second domain. Understanding the causes of failures requires reliable logging, audit trails, and system records.

The third domain is the security relationships between components. Many authentication systems rely on the integrity of identity information and cryptographic keys. If these elements are compromised, malicious agents could impersonate trusted systems, potentially creating cascading failures as AI agents interact and make decisions based on corrupted credentials.

Finally, integrity matters in our public definitions of safety. Governments worldwide are introducing rules for AI that focus on data accuracy, transparent algorithms, and verifiable claims about system behavior. Integrity provides the basis for meeting these legal obligations.

The importance of integrity only grows as AI systems are entrusted with more critical applications and operate with less human oversight. While people can sometimes detect integrity lapses, autonomous systems may not only miss warning signs—they may exponentially increase the severity of breaches. Without assurances of integrity, organizations will not trust AI systems for important tasks, and we won’t realize the full potential of AI.

How to Build AI Systems With Integrity

Imagine an AI system as a home we’re building together. The integrity of this home doesn’t rest on a single security feature but on the thoughtful integration of many elements: solid foundations, well-constructed walls, clear pathways between rooms, and shared agreements about how spaces will be used.

We begin by laying the cornerstone: cryptographic verification. Digital signatures ensure that data lineage is traceable, much like a title deed proves ownership. Decentralized identifiers act as digital passports, allowing components to prove identity independently. When the front door of our AI home recognizes visitors through their own keys rather than through a vulnerable central doorman, we create resilience in the architecture of trust.
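As a small illustration of signature-based provenance, the sketch below uses Ed25519 keys from the third-party Python cryptography package (my choice of tooling, not something the essay prescribes): the producer signs the data, and any consumer holding the published public key can verify that it is unaltered and attributable.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Producer side: generate a keypair and sign a piece of data.
producer_key = Ed25519PrivateKey.generate()
public_key = producer_key.public_key()      # published, e.g. alongside a decentralized identifier

data = b'{"dataset": "sensor-batch-042", "rows": 1000}'   # hypothetical payload
signature = producer_key.sign(data)

# Consumer side: accept the data only if the signature checks out.
try:
    public_key.verify(signature, data)
    print("provenance verified")
except InvalidSignature:
    print("rejected: data or signature was altered")
```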

Formal verification methods enable us to mathematically prove the structural integrity of critical components, ensuring that systems can withstand pressures placed upon them—especially in high-stakes domains where lives may depend on an AI’s decision.

Just as a well-designed home creates separate spaces, trustworthy AI systems are built with thoughtful compartmentalization. We don’t rely on a single barrier but rather layer them to limit how problems in one area might affect others. Just as a kitchen fire is contained by fire doors and independent smoke alarms, training data is separated from the AI’s inferences and output to limit the impact of any single failure or breach.

Throughout this AI home, we build transparency into the design: The equivalent of large windows that allow light into every corner is clear pathways from input to output. We install monitoring systems that continuously check for weaknesses, alerting us before small issues become catastrophic failures.

But a home isn’t just a physical structure; it’s also the agreements we make about how to live within it. Our governance frameworks act as these shared understandings. Before welcoming new residents, we provide them with certification standards. Just as landlords conduct credit checks, we conduct integrity assessments to evaluate newcomers. And we strive to be good neighbors, aligning our community agreements with broader societal expectations. Perhaps most important, we recognize that our AI home will shelter diverse individuals with varying needs. Our governance structures must reflect this diversity, bringing many stakeholders to the table. A truly trustworthy system cannot be designed only for its builders but must serve anyone authorized to eventually call it home.

That’s how we’ll create AI systems worthy of trust: not by blindly believing in their perfection but because we’ve intentionally designed them with integrity controls at every level.

A Challenge of Language

Unlike other properties of security, like “available” or “private,” we don’t have a common adjective form for “integrity.” This makes it hard to talk about it. It turns out that there is a word in English: “integrous.” The Oxford English Dictionary recorded the word used in the mid-1600s but now declares it obsolete.

We believe that the word needs to be revived. We need the ability to describe a system with integrity. We must be able to talk about integrous systems design.

The Road Ahead

Ensuring integrity in AI presents formidable challenges. As models grow larger and more complex, maintaining integrity without sacrificing performance becomes difficult. Integrity controls often require computational resources that can slow systems down—particularly challenging for real-time applications. Another concern is that emerging technologies like quantum computing threaten current cryptographic protections. Additionally, the distributed nature of modern AI—which relies on vast ecosystems of libraries, frameworks, and services—presents a large attack surface.

Beyond technology, integrity depends heavily on social factors. Companies often prioritize speed to market over robust integrity controls. Development teams may lack specialized knowledge for implementing these controls, and may find it particularly difficult to integrate them into legacy systems. And while some governments have begun establishing regulations for aspects of AI, we need worldwide alignment on governance for AI integrity.

Addressing these challenges requires sustained research into verifying and enforcing integrity, as well as recovering from breaches. Priority areas include fault-tolerant algorithms for distributed learning, verifiable computation on encrypted data, techniques that maintain integrity despite adversarial attacks, and standardized metrics for certification. We also need interfaces that clearly communicate integrity status to human overseers.

As AI systems become more powerful and pervasive, the stakes for integrity have never been higher. We are entering an era where machine-to-machine interactions and autonomous agents will operate with reduced human oversight and make decisions with profound impacts.

The good news is that the tools for building systems with integrity already exist. What’s needed is a shift in mind-set: from treating integrity as an afterthought to accepting that it’s the core organizing principle of AI security.

The next era of technology will be defined not by what AI can do, but by whether we can trust it to know or especially to do what’s right. Integrity—in all its dimensions—will determine the answer.

Sidebar: Examples of Integrity Failures

Ariane 5 Rocket (1996)
Processing integrity failure
A 64-bit velocity calculation was converted to a 16-bit output, causing an error called overflow. The corrupted data triggered catastrophic course corrections that forced the US $370 million rocket to self-destruct.

NASA Mars Climate Orbiter (1999)
Processing integrity failure
Lockheed Martin’s software calculated thrust in pound-seconds, while NASA’s navigation software expected newton-seconds. The failure caused the $328 million spacecraft to burn up in the Mars atmosphere.

Microsoft’s Tay Chatbot (2016)
Processing integrity failure
Released on Twitter, Microsoft’s AI chatbot was vulnerable to a “repeat after me” command, which meant it would echo any offensive content fed to it.

Boeing 737 MAX (2018)
Input integrity failure
Faulty sensor data caused an automated flight-control system to repeatedly push the airplane’s nose down, leading to a fatal crash.

SolarWinds Supply-Chain Attack (2020)
Storage integrity failure
Russian hackers compromised the process that SolarWinds used to package its software, injecting malicious code that was distributed to 18,000 customers, including nine federal agencies. The hack remained undetected for 14 months.

ChatGPT Data Leak (2023)
Storage integrity failure
A bug in OpenAI’s ChatGPT mixed different users’ conversation histories. Users suddenly had other people’s chats appear in their interfaces with no way to prove the conversations weren’t theirs.

Midjourney Bias (2023)
Contextual integrity failure
Users discovered that the AI image generator often produced biased images of people, such as showing white men as CEOs regardless of the prompt. The AI tool didn’t accurately reflect the context requested by the users.

Prompt Injection Attacks (2023–)
Input integrity failure
Attackers embedded hidden prompts in emails, documents, and websites that hijacked AI assistants, causing them to treat malicious instructions as legitimate commands.

CrowdStrike Outage (2024)
Processing integrity failure
A faulty software update from CrowdStrike caused 8.5 million Windows computers worldwide to crash—grounding flights, shutting down hospitals, and disrupting banks. The update, which contained a software logic error, hadn’t gone through full testing protocols.

Voice-Clone Scams (2024)
Input and processing integrity failure
Scammers used AI-powered voice-cloning tools to mimic the voices of victims’ family members, tricking people into sending money. These scams succeeded because neither phone systems nor victims identified the AI-generated voice as fake.

This essay was written with Davi Ottenheimer, and originally appeared in IEEE Spectrum.