The most invasive but effective way I've found to disable Defender is to boot into a live Linux USB, rename "C:\ProgramData\Microsoft\Windows Defender", and create an empty file in its place.
Group policies still work so effectively that I've set up a local domain using a controller in my homelab that does nothing but change the defender policies automatically for all users.
I thought so too, but if you switch everything off (including Tamper Protection) in the UI, then turn it off via (local!) Group Policy, it sticks. I’ve set up a few Windows 10 22H2 & 11 24H2 test VMs this way and they still have Defender disabled.
(I think you need to disable Tamper Protection first; otherwise you later get a "WinDefendDisable" threat detection. But if you allow/unquarantine it, it doesn't auto-enable again.)
With Linux, there's often a good clean way to do a thing, and then there are weird hacks.
On Windows, it often starts with weird hacks, as Microsoft is further enclosing its ecosystem.
(I use Windows mostly for gaming and VR, and still have to constantly fiddle with the system to keep it working on a basic level, sad face emoji. Who would've thunk that merely playing an 8K European documentary in VR would require configuring DirectShow filters found on GitHub.)
via Virtual Desktop I suppose. So mpv would do all of the video stuff and then would blit a SbS video onto VD, and VD would warp the two halves on a spherical surface?
The 'weird hack' is actually just a normal option left hanging in the Defender options that clearly states it will prevent "other" stuff from changing Defender settings.
To prepare a Win11 Enterprise edition image for distribution, I run a ~200-line PowerShell script, nuking every piece of bloatware MS puts into Windows.
It's ridiculous.
Linux distro devs, working for free, ship an excellent product; these clowns in high-paying jobs at Microsoft, pretending they're working, can't compare.
I found that this script broke Win+R Run dialog history by setting Start_TrackProgs. This was undocumented, and I had to disable it manually. (Worse yet, it doesn't show up on GitHub search because the .reg files are UTF-16.)
it's been disabled. Defender's group-policy auto re-enabling is readily reproducible; i have a screenshot showing Defender flagging the group policy change as a malware detection.
any control you think you have over windows is imaginary.
I have yet to see concrete evidence that disabling Windows update and windows defender would elevate risk of having the system compromised in any meaningful way.
I installed Windows 10 2016 LTSC on a VM at the end of last year, out of curiosity, to test that. I disabled Windows Update and Defender before letting it access the internet, so it was basically 8 years behind on any updates. I tried browsing all kinds of sketchy sites with Firefox and Chrome, clicking ads, etc., but wasn't able to get the system infected.
I would guess that keeping your browser updated is more important.
Correct! The browser is now the key vector because it's the most promiscuous and lascivious-for-code-and-data software on most devices.
Browser zero-days are why I factored out a way to distribute "web RPA agent creation" on any device, with no download, into its own product layer for browser isolation. It's a legitimate defense layer, but the main barrier to adoption is operating friction, even though it makes it much harder for attackers who want to compromise your network with browser 0-days.
Because of that, the RBI (remote browser isolation) aspect is not as popular as the uses where you need a really locked-down browser, with policies preventing upload/download and even copy and paste, for DLP (data loss prevention) in regulated enterprises.
Even so I think the potential applications of this tech layer are just starting.
> I have yet to see concrete evidence that disabling Windows update and windows defender would elevate risk of having the system compromised in any meaningful way.
It’s much less likely than it was 20 years ago. A lot of attack vectors have already been fixed. But hypothetically a bug in the network stack could still leave an internet connected machine vulnerable.
I use stock Win7 SP1 with just a couple updates (recently TLS and SHA-512, but only 27 hotfixes in total) and the only way to break something is if I deliberately run unverified executables that were manually downloaded from untrusted sources. And since I don't do this - my machine is still running the same installation that I did on December 24th 2014.
> browsing all kinds of sketchy sites with Firefox and chrome
How did you install those? Downloaded via another system? Because with a system that old, you are missing up-to-date root certificates (Firefox and Chrome bring their own).
It would make sense if the cost/danger for the thieves to check every door would be prohibitive. Unfortunately, with networked computers, checking the doors is usually both riskless and effectively free.
There are still active attacks against DOS and Win98. Automated driveby attacks, just looking to increase the size of a bot farm. There are still new exploits being released against rather old systems.
You attack the networking stacks; those are still actively developed (mTCP was last updated in Jan 2025), as businesses use networked DOS for quite a few things. A DOS networking stack consists of a packet driver, a NIC driver, and a protocol library, and all of those have attack surface. NIC drivers in particular often haven't had updates since they were first released, because for hardware manufacturers of the time the goal was getting people to use the hardware, not supporting it afterwards. There are newer DOS NIC drivers than you'd think, too: Realtek, last I checked, still makes and supports an ISA NIC.
So you are not talking about attacking old code at all, but networking stacks that are indeed actively developed? That feels like a very different ball game from attacking Win98, even if the platform they are running on top of is old.
It's a complicated space. There are attacks on both maintained and unmaintained stacks. There are definitely attacks against windows 95/98 too because people have things like mills or other industrial automation that are powered by those OSes still connected to the internet. There is a lot of SCADA[1] too that fits that bill. It's easy to think "but why wasn't this replaced!" and the answer is almost always "cost or process certification". If the operator is lucky and has good networking folks all of this is in a very very well firewalled VLAN. But, never underestimate the amount of people that are not that savvy and just have it plugged into the internet.
For anyone saying these aren't targets: no, they are probably already hacked. These are the things that keep the national-security folks up at night, knowing an adversary has them already backdoored and set up for takedown. Moreover, if they execute on that, they would go for maximum damage first, to either create chaos or prevent the system from being repaired easily.
Would suck if an exploit was present for years, sometimes decades. Would especially suck if people piled up old exploits and fell back on them as needed.
Everything was a zero-day at one point in time. The effort is indeed usually put in while it is the current version. But retrying all old malware isn't effort; it is more or less the definition of a script kiddie (though state-level attackers will do it too).
Those of us who actually do this stuff for a living still routinely see probes for Slammer, Zotob, Blaster and more from when we booted our computers by rubbing two sticks together.
Actually riddle me this: what if you want to exploit exactly the type of person to disable updates? They are potentially more lucrative targets if nobody else targets them. Just a thought. It's sort of how "delete me" services profit off paranoia, they're a lucrative market because of the paranoia.
This is about the binaries. I first tried renaming the folder in Program Files, but Defender still kept eating RAM and CPU resources which were scarce on a 12-year-old laptop.
It needs to be closer to where the acronym is first introduced. The definition, on my screen, is below the fold, so it cannot be seen in context with the acronym's first use. If it were defined below the title, I would understand.
This is somewhat useful feedback, but I'm not too sure how it can be fixed given the structure of my blog post. Do you think just adding a line `*WSC is short for Windows Security Center` in the first paragraph would be enough?
In this post I will briefly describe the journey I went through while implementing defendnot, a tool that disables Windows Defender by using the Windows Security Center (WSC) service API directly.
Or use the `abbr` element (and its title attribute) that was designed for exactly that purpose; no extraneous flow-breaking required. Mobile users can long-press on the indicator to read more, and everyone who magically knew what WSC means gets to continue knowing.
The typical solution is to include the expansion in brackets after the first use.
Simple rule I learned on my Electronic Engineering degree (where we're guilty of many, many acronyms): when you write an acronym/initialism in a paper (or anywhere for others to read, really), assume the reader doesn't know what it stands for and include the expansion in brackets immediately after the first use.
EDIT: As my sibling comment also suggests, writing it in full the first time, and using the acronym/initialism in brackets is also acceptable.
> the project blew up quite a bit and gained ~1.5k stars, after that the developers of the antivirus I was using filed a DMCA takedown request
I got really, really confused after that statement, because I don't understand what "the antivirus I was using" means and why they would have a reason to send the author a DMCA.
I think it means the author reverse-engineered another antivirus and put parts of it in their open-source project. But it could also mean other things. Skimming I see a heading with "Impersonating WinDefend".
So is the gist that the author somehow broke some kind of copyright law?
My understanding is he used the carcass of another AV tool to bypass signature requirements which is understandably grey (there's an argument for it being transformative, IMO but IANAL).
yes, they broke copyright law by copying part of an existing AV program.
From the paragraph directly before the one you quoted:
> The way how my project worked is that it was using a thirdparty code from some already existing antivirus and forced that av to register the antivirus in WSC.
Using the macros in the second linked file, this expands to:
auto _defer_instance_1234 = Defer{} % [&]()->void { CoUninitialize(); };
* The 1234 is whatever the line number is, which makes the variable name unique.
* auto means infer the type of this local variable from the expression after the =.
* Defer{} means default construct a Defer instance. Defer is an empty type, but it allows the % following it to call a specific function because...
* Defer has an overloaded operator%. It's a template function, which takes a callable object (type is the template parameter Callable) and returns a DeferHolder<Callable> instance.
* [&]()->void { /*code here*/ }; is C++ syntax for a lambda function that captures any variables it uses by address (that's the [&] bit), takes no parameters (that's the () bit) and returns nothing (that's the ->void bit). The code goes in braces.
* DeferHolder calls the function it holds when it is destroyed.
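Putting those bullet points together, here is a minimal self-contained sketch of the machinery. It follows the description above but is my reconstruction, not the project's exact code (the `DEFER_CONCAT` helpers in particular are assumed):

```cpp
#include <utility>

// Runs the stored callable when it goes out of scope (plain RAII).
template <typename Callable>
struct DeferHolder {
    Callable fn;
    ~DeferHolder() { fn(); }
};

// Empty tag type; its operator% wraps the lambda written after it.
struct Defer {
    template <typename Callable>
    DeferHolder<Callable> operator%(Callable fn) {
        return DeferHolder<Callable>{std::move(fn)};
    }
};

// Two-level concatenation so __LINE__ expands before token pasting.
#define DEFER_CONCAT_IMPL(a, b) a##b
#define DEFER_CONCAT(a, b) DEFER_CONCAT_IMPL(a, b)
#define defer auto DEFER_CONCAT(_defer_instance_, __LINE__) = Defer{} % [&]()->void

// Demonstration: the deferred lambda runs when the inner scope exits.
int demo_order() {
    int last = 0;
    {
        defer { last = 1; };
        last = 2;            // runs first
    }                        // deferred lambda runs here, setting last = 1
    return last;
}
```

So `defer { last = 1; };` really is an ordinary variable declaration in disguise, which is why the destructor-at-scope-exit rules discussed below apply to it.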
It's subjective but some (including me!) would say it's cursed because it's using a macro to make something that almost looks like C++ syntax but isn't quite. I'm pretty confident with C++ but I had no idea what was going on at first (except, "surely this is using macros somehow ... right?"). [Edit: After some thought, I think the most confusing aspect is that defer->void looks like a method call through an object pointer rather than a trailing return type.]
I'd say it would be better to just be honest about its macroness, and also just do the extra typing of the [&] each time so the syntax of the lambda is all together. (You could then also simplify the implementation.) You end up with something like this:
DEFER([&]()->void { CoUninitialize(); });
Or if you go all in with the no-args lambda form, you could shorten it to something like `DEFER([&]{ CoUninitialize(); });`, since C++ lets a lambda omit the empty parameter list and trailing return type.
That's interesting! So i assume that this macro allows code to get registered to be run after the 'current' scope exits.
But from my understanding (or lack thereof), the `auto _defer_instance_1234 =` is never referenced post construction. Why doesn't the compiler immediately detect that this object is unused and thus optimize away the object as soon as possible? Is it always guaranteed that the destructor gets called only after the current scope exits?
> Why doesn't the compiler immediately detect that this object is unused and thus optimize away the object as soon as possible? Is it always guaranteed that the destructor gets called only after the current scope exits?
Yes, exactly. The destructor is allowed to have some visible side effect, such as closing a file handle or unlocking a mutex, that could violate the assumptions of the code in that block. (Even just freeing some memory could be an issue for code in the block.) It is guaranteed that the destructor is called at the end of the block, and that all the destructors called that way run in reverse order to their corresponding constructors.
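The reverse-destruction-order guarantee is easy to observe with a small sketch (nothing here is specific to the defer macro):

```cpp
#include <string>

// Each Tracer appends its tag to a shared log when destroyed, making
// destructor ordering observable.
struct Tracer {
    std::string& log;
    char tag;
    ~Tracer() { log += tag; }
};

std::string destruction_order() {
    std::string log;
    {
        Tracer a{log, 'a'};  // constructed first, destroyed last
        Tracer b{log, 'b'};
        Tracer c{log, 'c'};  // constructed last, destroyed first
    }
    return log;              // "cba"
}
```

The compiler is free to optimize the objects themselves, but only under the as-if rule: the observable side effects of the destructors must still happen at scope exit, in that order.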
I don't think we actually need `->void` -- shouldn't the compiler be able to infer the return type (or rather, absence thereof)? My experience is that the compiler only struggles when the return value needs to be implicitly converted to some other type.
Would it have looked any less cursed if it just read `defer { CoUninitialize(); };`?
Agreed that the simplest "fix" would be to just rename the macro to be all-caps.
> Would it have looked any less cursed if it just read `defer { CoUninitialize(); };`?
It's subjective but personally I still hate it.
> Agreed that the simplest "fix" would be to just rename the macro to be all-caps.
Actually I think the bigger part of my suggestion is switching from an object-like macro to a function-like macro [1], which makes it all a bit less magical.
And, I personally hate macros that pretend to be functions but provide no visual indicator that they're not actually functions. For instance, `#define min(x, y) (x < y ? x : y)` evaluates its args multiple times. It's a little less bad when it only takes a single argument, but I am still irritated by things like htonl.
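A quick sketch of the multiple-evaluation pitfall with exactly that `min` macro:

```cpp
// The macro from the comment above, verbatim: no parentheses, and
// each argument appears textually twice in the expansion.
#define min(x, y) (x < y ? x : y)

int macro_min_side_effect() {
    int i = 0;
    int j = 10;
    // Expands to (i++ < j ? i++ : j): i++ is evaluated twice whenever
    // the condition is true. (The ?: operator sequences the condition
    // before the chosen branch, so this is well-defined, just wrong.)
    int m = min(i++, j);
    (void)m;                 // m is 1, not 0
    return i;                // 2, where a real function would leave 1
}
```

A real function (or a function-like name in ALL CAPS) would at least warn the reader that this trap is possible.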
I think the "best" approach here would be to make it a function-like macro, and also change the name to all caps.
(Also, I tend to agree that `defer { ... };` is still cursed -- it requires the trailing semicolon, which further breaks the illusion of a keyword that takes a block scope.)
> * Defer has an overloaded operator%. It's a template function, which takes a callable object (type is the template parameter Callable) and returns a DeferHolder<Callable> instance.
Is there any reason to use operator% instead of a normal method call? Except possibly looking cool, which doesn't seem useful given that the call is hidden away in a macro anyway.
If you used a normal method call then there would need to be a corresponding close bracket at the end of the overall line of code, after the end of the lambda function. But the macro ("defer") only occurs at the start of the line, so it has no way to supply that close bracket. So the caller of the macro would have to supply it themselves. As I mentioned near the end of my comment, it seems like the defer macro is specifically engineered to avoid the caller needing a close bracket.
If you don't mind that, I said that you can "simplify the implementation" - what I meant was, as you say, you don't need the overloaded Defer::operator% (or indeed the Defer class at all). Instead you could do:
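The code sample appears to have been lost from this comment. Based on the surrounding description (drop the `Defer` class and `operator%`, make the macro function-like), the simplified version is presumably something along these lines; the helper names are my guesses:

```cpp
#include <utility>

// Same DeferHolder as before: runs its callable on destruction.
template <typename Callable>
struct DeferHolder {
    Callable fn;
    ~DeferHolder() { fn(); }
};

// A plain factory function replaces the Defer tag class and its
// overloaded operator%.
template <typename Callable>
DeferHolder<Callable> do_defer(Callable fn) {
    return DeferHolder<Callable>{std::move(fn)};
}

#define DEFER_CAT_IMPL(a, b) a##b
#define DEFER_CAT(a, b) DEFER_CAT_IMPL(a, b)
// Function-like macro: the whole lambda is the macro argument, so the
// caller supplies the closing bracket themselves.
#define DEFER(fn) auto DEFER_CAT(_defer_, __LINE__) = do_defer(fn)

int defer_demo() {
    int x = 0;
    {
        DEFER([&]{ x = 1; });
        x = 2;
    }
    return x;  // 1: the deferred lambda ran at scope exit
}
```

One caveat of the function-like form: an unparenthesized comma inside the lambda body would split the macro argument, which is one reason some implementations keep the operator-overload trick.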
Eh, there are better implementations that are less syntactically obtuse (no `->void`), but other than that it's fine. It's fairly obvious what it's supposed to do, and I've needed similar things in the past. There's a CppCon talk that uses the `->*` operator for precedence reasons, and the macro lets you use it like `defer { … };`.
This is a class which implements a 'defer' mechanism, similar to Go's defer construct, which does the same thing: delay execution of the given block until the current block scope is exited. It's pretty clever, actually, and quite useful.
I personally don't find it that cursed, but for many old C++ heads this may be an overwhelming smell - adding a class to implement what should be a language feature may tweak some folks' ideology a bit too far.
yeah, sorry, i didn't feel like implementing my own RAII stuff for all the COM thingies due to time constraints. it will be changed in the next update though
Honestly if this isn't part of a public API this isn't very cursed in terms of C++, especially if you have a lot of one-off cleanup operations.
I think the only bit I don't like personally is the syntax. I normally implement defer as a macro to keep things clean. If done correctly it can look like a keyword: `defer []{ something(); };`.
I think the syntax is exactly why they're saying it's cursed. IMO your suggestion is no better - yes it makes defer look like a keyword, but it's not! As I said in a sibling comment, I think it's clearer if you're honest that you're using a macro: DEFER([](){something();});
Or you could even make a non-macro version (but then you need to think of variable names for each defer):
auto defer_uninitialise = do_defer([](){CoUninitialize();});
Sure, I've used __LINE__ for this before too, and yeah I agree that my keyword construction was too clever (seemed cool at the time, since the macro had a dangling = at the end to make it work).
Which looks much nicer. And since the braced block sits entirely outside the macro expansion (the preprocessor never sees it as an argument), it can span as many lines as you like.
The code defers a function call until the point in time that an object goes out of scope. The implementation uses C++ preprocessor macros to create a more succinct syntax that omits parts of the necessary lambda definition and to create a unique variable name for managing the deferred function call. However, the resulting syntax eschews the common convention of using UPPER CASE to denote macros, and instead appears similar at first glance to a function call through an object pointer.
This can cause confusion if one is not familiar with this pattern and expects macros to be communicated differently. Some commenters say this is common enough, or useful enough to them, to be considered almost idiomatic in some contexts.
lol, i significantly improved my vacation by reverse engineering the virtual desktops on windows :)
best memories of last year: reverse engineering is hellovafun!
Yes, a small number, but it changes each year due to AV vendors (including Microsoft) changing how their AV works. It also depends on whether one looks at the impact from passively running the antivirus vs actively running a scan.
I understand and mostly support the idea of mandatory AV for the people who can barely handle the concept of a file system.
There is also a class of user forged in the fires of the primordial internet who would never in a trillion years be tricked into clicking a fake explorer.exe window in their browser.
Giving users choice is the best option. Certainly, make it very hard to disable the AV. But, don't make me go dig through DMCA'd repos and dark corners of the internet (!) to find a way to properly disable this bullshit.
Skilled in what, exactly? In X-raying all the data storage on a system with the naked eye and spotting malware there? In sniffing the ether around the system and smelling malicious bits coming in on the radio spectrum? How does this skill work?
I've been using computers for 40 years, have never installed and have always disabled malware scanners, and never had a virus. Maybe I'm special. But I'm not that special. There are 3 billion Android users in the world; almost all of them don't have malware scanners, and almost all of them have never been infected by malware. Ditto iPhone users.
To be fair, I haven't used Windows for the latter 1/2 of that 40 years. So maybe it's only Windows users who need to go around x-raying all data storages.
I've used computers a bit less than that, since the 90s, and I'm also careful not to do dumb stuff on them. But I can't guarantee that any of my PCs at any point in time is virus-free, because I don't know that and can't know it. And that includes Linux, by the way, though statistically it is much safer. But Linux is beside the point; the whole topic is about removing a Windows component, and on Windows there are millions of different malware strains.
> There is also a class of user forged in the fires of the primordial internet who would never in a trillion years be tricked into clicking a fake explorer.exe window in their browser.
Until they've had a couple drinks. Might still need a more sophisticated fake than that, but they exist. I'm with you on the disabling part though: I think Apple gets it right with SIP, it takes a reboot in recovery mode to disable it temporarily and a single command while in recovery mode to make it permanent.
Ah yes, I have my Windows power user bingo card dusted off! So far in this thread I’ve got:
- Antivirus software is malware
- We have to disable Windows Updates because I didn’t like them 30 years ago
- Windows Defender hogs resources, laptop reviews showing Windows systems getting 10 hours of web browsing battery life are lying, Windows Defender actually ruins the performance of your computer
- It’s better to complain constantly about Windows and spend hours disabling functionality rather than switch to Linux
I’m just waiting for “Windows sucks I’m thinking about switching to Linux but never end up doing it” and I’ll have a bingo!
>Windows Defender hogs resources, laptop reviews showing Windows systems getting 10 hours of web browsing battery life are lying, Windows Defender actually ruins the performance of your computer
There are definitely times when I wish I could disable it outright. Often someone will want my help reviving an old computer or laptop, and it'll have to sit for a day in a loop of Windows Update fighting Windows Defender for resources, neither of them making much headway, before one or the other finishes enough to let the other run for a bit.
We use some software that stores each record in a separate file; basically using the filesystem as a database.
Without adding an exception to Windows defender, that software is unusably slow. Once the exception is added (or defender is turned off) the software is nice and fast again.
The solution there is adding the exception, not turning off Defender, especially when you don't have control over what other activities may take place on the system.
Exceptions are valid when scoped to a container where you reasonably expect to be the sole user of the data therein and it contains no executable code.
I honestly have never seen Defender behave with exceptions properly. Sometimes it does, sometimes it doesn't. Seems to depend on whether the day starts with a T.
While your first statement is reasonable, your second is uncharitable and hostile.
If Windows won't allow use of the filesystem as a database or cannot heuristically detect when a folder is being used as a store of data, Windows is wrong, not the developer.
Amusingly Microsoft ships exclusions for their own software, and states "Opting out of automatic exclusions might adversely impact performance, or result in data corruption. Automatic server role exclusions are optimized for Windows Server 2016, Windows Server 2019, Windows Server 2022, and Windows Server 2025."
> If Windows won't allow use of the filesystem as a database or cannot heuristically detect when a folder is being used as a store of data, Windows is wrong, not the developer.
I guess Nintendo is wrong for not giving you a file system at all on the Game Boy. The analogy may be extreme but that’s part of the point here: who are we to dictate Microsoft’s design goals and choice of compromises?
It’s really not Microsoft’s fault if their product doesn’t meet the specific needs of someone’s specific software use case.
I do agree that my first suggestion is the more sensible one, but my second one was more of a philosophical point. Windows has been the same old Windows for a long time and developers that don’t understand its limitations and requirements for deploying applications are more in the wrong than Microsoft in this scenario.
If Microsoft felt like the best design decision was to remove windows defender and
that there was no negative impact to doing so they would have done it by now.
I'm sorry, but I believe your argument is extremely weak.
The Nintendo/Game Boy analogy doesn’t hold water. Nintendo doesn’t give you a filesystem on the Game Boy, but it certainly doesn’t stop you from implementing one yourself. Nintendo doesn’t include a filesystem because that’s not part of the Game Boy’s platform model; it’s a console with fundamentally different goals and constraints. If you require a virtual filesystem to load assets for your game, Nintendo _will not_ slow your cartridge ROM down.
Windows, on the other hand, has always shipped with a general-purpose filesystem and encourages developers to use it for data persistence, caching, configuration, and more. In fact, the Win32 API is deeply file-centric. Even the OS has its own hidden virtual filesystem.
Windows is a Unix-inspired CP/M derivative, and both lineages are strongly file-based. In fact, when Windows tried to replace the filesystem with a database in Longhorn, they failed spectacularly, and only a few pieces of that design are left today. What still exists, however, is a filesystem optimized for storing files.
Suggesting that developers are "in the wrong" for relying on the filesystem on an OS that has always promoted it is like blaming drivers for expecting roads to be usable. We’ve been building software on Windows that reads and writes files for decades, with Microsoft’s full blessing.
If Defender or related tooling starts punishing valid, decades-old patterns like using a folder as a key-value store, that’s not a failure of developers to "understand Windows". It’s a regression in the OS, or at least a poor balance of heuristics.
We absolutely should question Microsoft’s design goals if they break longstanding, legitimate use cases without offering workable alternatives. Being dominant doesn’t make them immune to critique, especially when their changes have real-world consequences for maintainable, cross-platform software using well-established techniques.
Why blame me? I didn't write the software, I only use it. But yes, I consider it badly written software due to that design. I would use SQLite for that particular use case. It would make the programming easier and more performant.
That “architecture of the parent OS” is so shitty they had to introduce a first party “Dev Drive” mode to disable said architecture wholesale so that developer workflows aren’t crippled. Think about that.
I assume you either don’t really know what you’re talking about, or are arguing in bad faith.
Oh, and people develop software for a living and sometimes that involves making sure the software works on Windows. Not everyone complaining is using Windows by choice.
In what universe is windows defender “resource-crippling?” There are windows laptops that will sip battery for an entire workday plus extra hours while running defender the entire time. So clearly it’s not “resource-crippling” if it can run on a laptop with a single digit wattage power draw.
And then we’ve got the “I need to control my system I’m too smart for antivirus” folks all over this thread.
Well, if you’re so smart why are you using a consumer OS designed for idiots?
(I like OP’s tongue-in-cheek work and post a whole lot better than the neckbeard army describing how Windows is broken and totally doesn’t work and how we have to disable updates and antivirus because we are power users I guess so we just do that for no reason)
> In what universe is windows defender “resource-crippling?”
This one? Not all of us want to throw perfectly usable hardware in the e-waste pile. Windows 10 was perfectly fine on my old Haswell miniPC, save for Defender wasting CPU cycles and IO doing..."checks".
Let’s cut the bullshit, Defender is basically unchanged as a concept since Windows Vista or maybe even Windows XP. It runs completely fine on 15 year old hardware.
We are in the “Windows users complain endlessly and refuse to switch to Linux” bingo card right now. Windows has been this way since before you bought that mini PC.
> Let’s cut the bullshit, Defender is basically unchanged as a concept since Windows Vista or maybe even Windows XP. It runs completely fine on 15 year old hardware.
Exactly. It's the same legacy scan every fucking thing you open AV architecture.
Back in the days of spinning disks it probably wasn't too noticeable for the AV to marshal scanning out to its usermode service and have the filesystem pull the data from cache for the original request afterwards. But now that we have SSDs capable of 10 GB/s+, the relative slowdown is far larger.
I can run ripgrep on a massive directory, make myself a cup of tea, and come back to it still searching for matches, versus it being done in under 10 seconds with Defender disabled.
Yeah so like, every time I ran AV software it was quite obvious where the paranoia settings were, and how to tone down the aggressive "scan everything everywhere every time" settings.
For 98% of systems, there is probably no reason to scan every file on opening it. If people have enabled that setting, or left that default on, then that's their problem; it's not Windows Defender's fault.
My current AV dashboards are screaming at me that I'm only 35% protected. That's because I've exercised a lot of prudence in enabling paranoid settings, based on my rather limited and simplistic threat modeling. Installing AV software comes with the understanding that it can steal resources, but they nearly always have plenty of settings that can be disabled and win back your system responsiveness.
I am beginning to believe that commenters giving bingo-card winnings are not the brightest bulbs in the Windows MCSE pool, honestly. I can relate: Linux and Unix admin in general is far more intuitive and comfortable for me, so I have generally stayed on that side of things, but knowing how to properly set up Windows is an indispensable life skill for anyone.
> If people have enabled that setting, or left that default on, then that's their problem; it's not Windows Defender's fault.
There is no such setting for Defender. The file scanning is either on or defender is completely off. To even access some of the better stuff like ASR rules (that are disabled by default) you need third-party software or pay for their enterprise offering.
Consumer Defender literally has like 4 toggles in total. It's a dumbed down and extremely permissive AV because it runs on every Windows machine.
>In what universe is windows defender “resource-crippling?”
In any universe where you do a lot of small file IO. I'm not saying that other AV isn't far worse, but on access/write/delete AV massively kills performance when you do anything that creates/deletes tons of small files.
If you are a threat actor, you could get lucky and there isn't another Endpoint Detection and Response product installed, which would almost certainly intercept this.
If you are an EDR vendor, this is an obfuscated API call that EDR vendors can use to suppress or disable the Windows Firewall. CrowdStrike for example, can do either I believe, use Windows Firewall or use their implementation.
Well this is a straightforward sentiment with a real "my body, my choice" ring to it, isn't it? Until it isn't.
Perhaps your hardware, when connected to a network, has real effects on the rest of that network. What if your system joined a botnet and began DDOS activities for payment? What if your system was part of a residential proxy network, and could be rented in the grey market for any kind of use or abuse of others' systems? What if your system became a host for CSAM or copyright-violating materials, unbeknownst to you, until the authorities confiscated it?
And what if your hardware had a special privileged location on a corporate network, or you operated a VPC with some valuable assets, and that was compromised and commandeered by a state-level threat actor? Is it still "your hardware, your choice"? Or do your bad choices affect other people as well?
Man, that is a silly line of thought. Your conclusion now has to be that all freedom is bad because people's choices can have ramifications, yeah?
Oh, you chose to buy new shoes even though they were too tight, which distracted you for one second in your car on the way home, due to the discomfort, so you hit someone and they died.
Clearly people can not be trusted to buy their own shoes!
Geez, what a cluster* of a comment. You mix in a bunch of theoreticals you came up with in 5 seconds that cover different domains, and then don't actually go to the effort of critically examining your own statements, which would be appreciated and would make for much higher quality comments.
>Perhaps your hardware, when connected to a network, has real effects on the rest of that network. What if your system joined a botnet and began DDOS activities for payment? What if your system was part of a residential proxy network, and could be rented in the grey market for any kind of use or abuse of others' systems?
This at least is "you, affecting others". But the obvious immediate response is that such things done via the network can be mitigated or blocked at the network layer, and indeed must be anyway, since attackers are doing them from across the world 24/7 regardless. I'd fully support ISPs having to throttle or even potentially block-until-fixed any customers who participate in active network attacks, and other parts of the internet throttling or blacklisting ISPs that refuse to cooperate. But making someone deal with the consequences of their choices is no reason to deny them the choices in the first place, given that most of those making such choices are not, in fact, actually going to end up doing any of what you listed.
>What if your system became a host for CSAM or copyright-violating materials, unbeknownst to you, until the authorities confiscated it?
Here (and seriously ZOMG THINK OF THE CHILDREN, lol really? on HN, in 2025?) you veer off into personal consequences to the person making the choice, as opposed to them being part of an attack on others. This is just saying "there could be risks to you if you mess it up!" which is a complete non-statement.
>And what if your hardware had a special privileged location on a corporate network, or you operated a VPC with some valuable assets, and that was compromised and commandeered by a state-level threat actor? Is it still "your hardware, your choice"? Or do your bad choices affect other people as well?
Um. Hello? Why is corporate IT allowing you to BYOD into a special privileged location on the corporate network without even so much as a management agreement or contractual responsibilities? At this point you've veered off the road of reality, because in actual reality you don't own hardware in special privileged locations, or at least, by your own agreement, don't have full choice over it. And if that's not the case, hooboy are there a lot of other fundamental issues there. That's not an argument for a blanket universal policy.
cost-benefit. the time/electricity/battery/frustration cost of windows defender dwarfs its utility. i’d be better off with some east euro hackerman’s crypto miner running in the background than WSC. at least hackerman knows how to not peg my CPU at 90% while he’s mining his moneros.
I recently read https://nostarch.com/windows-security-internals and it makes this much more relatable. I already knew a bit about how a lot of this low-level stuff works in Windows, but the timing is great: the last chapter of that book goes into the same detail about tokens and SIDs that this author did.
What's worse for me is that Check Point Harmony does not utilize the interfaces of Defender crafted for this purpose, but instead publishes a knowledge base article telling users to disable Defender themselves.
The dynarec systems in QEMU aren't as efficient as the native dynarec systems in Windows and macOS (Rosetta 2). You can definitely run x86 Windows with UTM, and it works, but the performance characteristics are pretty poor. From a utility perspective, I've found that running an ARM Windows VM and using Windows' dynarec system to run x86 apps, or using WINE (both using native compiled subsystem code) is a much better experience. It's one of those things where it's okay if you need to run a workload in a pinch.
I'm not sure if performance characteristics are part of what the OP considers "sane", but if it is, I get the position.
It depends on what you need though, because arm windows has its own rosetta-like translation and does run x86 applications.
I set up Windows on ARM inside a UTM VM as a test, then installed Visual Studio (not Code!), which is an x86 application, and it was pretty much usable.
The codebase I was working on complained about missing some OpenGL parts, so I stopped and haven't investigated further (I have x86 boxes for working on it). But depending on your requirements, the above setup may be just fine(tm).
Correct me if I'm wrong, but isn't the emulation of an MMU-equipped CPU a fundamentally slow and unoptimizable task? Apple's Rosetta and its Microsoft equivalent only work as fast as they do because they only run userspace code so they don't have to emulate the MMU.
> My current main machine for non-ctf things is an M4Pro MacBook, and usually, when I am going for a CTF I bring an another x86 laptop with me to do some extensive reverse engineering/pwn stuff as it is usually built for the x86 cpus. Emulation would kind of work for this task but it is pretty painful so I just use an another laptop for all the x86 stuff.
Seems like they don't want to bother with emulation when many of the challenges are not compatible with their main computer.
Lmao reverse engineering WSC on vacation sounds like some real dedication - honestly can't tell if that's commitment or just a cry for help. Made me think: if tuning all this stuff gives you a headache, would you rather have max security or just peace of mind and a fast machine?
This is a godsend. I should send you a jar of kimchi for this. Please return to Seoul and enjoy the sights. South Korea is one of the most beautiful countries in the world. Try to plan your trip to coincide with either the cherry blossoms falling in the spring, or the leaves falling in the fall.
I think the point is to disable defender: Air-gapped machines, kiosks, industrial applications, and so on, have no need to eat gobs of ram and waste loads of cpu checking the same files over and over again. For other applications, WD provides dubious benefits. It is annoying that there isn't a switch that says "I know how to operate a computer".
Evildoers don't need to bother with this: If they have access at this point you've got other problems.
Microsoft may extend WD to detect/block this vector since it uses undocumented interfaces; Microsoft would absolutely prefer you buy more cores, and if you're not going to do that, they'll collect some additional licensing revenue some other way.
That is one possible point, but on machines with low memory (like a lab full of 8 GB potatoes) this is a godsend. These lab PCs are so stripped down that the only thing using most of the memory is WD.
You should be able to have a normal mode that runs full security, and a gaming mode that just runs one semi-large game. And yes, this does expose a vulnerability, but protection can easily be brought back up afterwards.
Oof, really? Haven't really used windows much after 7, but it always seemed to me defender was pretty lightweight. At least compared to all the other products where just opening the UI would lag out the average machine.
The most invasive but effective way I've found to disable Defender is to boot into a live Linux USB, rename "C:\ProgramData\Microsoft\Windows Defender", and create an empty file in its place.
Group policies still work so effectively that I've set up a local domain using a controller in my homelab that does nothing but change the defender policies automatically for all users.
Group Policy no longer works on Win11; updates will reverse it. Additionally, Defender detects turning off realtime monitoring as malware.
Group policies and registry keys are gentle suggestions. Deleting or renaming files is "I wasn't asking, it's my computer not yours" kind of approach.
Oh, I thought the "I wasn't asking" option was to just reimage it with Linux.
…until Windows Update Repair or the like undo your changes.
You can do this to Windows Update too.
Which itself gets repaired by Windows Update Repair.
Either way, removing C:\windows\system32\wua* did it for me
I thought so too, but if you switch everything off (including Tamper Protection) in the UI, then turn it off via (local!) Group Policy, it sticks. I’ve set up a few Windows 10 22H2 & 11 24H2 test VMs this way and they still have Defender disabled.
(I think you need to disable Tamper Protection first, otherwise you later get a threat detected of “WinDefendDisable”, but if you allow/unquarantine it doesn’t auto-enable again)
And yet I have none of these issues on 11 LTSC 24H2? Sounds like you forgot to disable Tamper Protection
As someone who moved to Linux 10 years ago, this comment chain shows Windows became the real hacker distro
In a sense, it has been for a long time.
With Linux, there's often a good clean way to do a thing, and then there are weird hacks.
On Windows, it often starts with weird hacks, as Microsoft is further enclosing its ecosystem.
(I use Windows mostly for gaming and VR, and still have to constantly fiddle with the system to keep it working on a basic level, sad face emoji. Who would've thunk that merely playing an 8K European documentary in VR would require configuring DirectShow filters found on GitHub.)
> Who would've thunk that merely playing a 8K European documentary in VR would require configuring DirectShow filters found on GitHub.
Dios Mio, get mpv, enable gpu-hq
Thanks! How do I run it in VR though? Can't find it in the manual[1]
[1]: https://mpv.io/manual/master/
via Virtual Desktop I suppose. So mpv would do all of the video stuff and then would blit a SbS video onto VD, and VD would warp the two halves on a spherical surface?
Honestly I've never thought about that before.
By doing it slowly, they're enabling a hacker spirit to evolve, which I'm sure is unintentional.
The 'weird hack' is actually just a normal option left hanging in Defender options that clearly states it will prevent "other" stuff from changing Defender settings
To prepare a Win11 Enterprise edition image for distribution, I run a ~200-line PowerShell script, nuking every piece of bloatware MS puts into Windows. It's ridiculous.
Linux distro devs, working for free, push an excellent product; these clowns in high-paying jobs at Microsoft, pretending to work, can't compare.
Care to share the powershell script with us?
https://github.com/Raphire/Win11Debloat
I start with Tiny11 first though these days, then run that to get rid of the last few bits.
I found that this script broke the Win+R Run dialog history by setting Start_TrackProgs. This was undocumented, and I had to disable it manually. (Worse yet, it doesn't show up in GitHub search because the .reg files are UTF-16.)
It's been disabled. Defender's auto re-enabling after a Group Policy change is readily reproducible; I have a screenshot showing Defender flagging the Group Policy change as a malware detection.
any control you think you have over windows is imaginary.
Once again: Tamper Protection
It's weird that windows wouldn't have a signed manifest that would detect that
You can also disable Windows Update entirely by taking ownership of wuaueng.dll and .exe. It’s the only effective method on Windows Home.
But disabling updates on the system connected to the Internet is a terrible idea.
How do you update that afterwards?
I have yet to see concrete evidence that disabling Windows update and windows defender would elevate risk of having the system compromised in any meaningful way.
I installed Windows 10 LTSC 2016 on a VM at the end of last year out of curiosity to test that. I disabled Windows Update and Defender before letting it access the internet, so it was basically 8 years behind on any updates. I tried browsing all kinds of sketchy sites with Firefox and Chrome, clicking ads, etc., but wasn't able to get the system infected.
I would guess that keeping your browser updated is more important.
Correct! The browser is now the key vector because it's the most promiscuous and lascivious-for-code-and-data software on most devices.
Browser zero-days are why I factored out a way to distribute "web RPA agent creation" on any device, with no download, into its own product layer for browser isolation. It's a legitimate defense layer, but the main barrier to adoption is operating friction, even though it makes the task of hackers who want to compromise your network with browser 0-days much harder.
Because of that, the RBI (remote browser isolation) aspect is not as popular as deployments where you need a really locked-down browser, with policies preventing upload/download, even copy and paste, etc., for DLP (data loss prevention) in regulated enterprises.
Even so I think the potential applications of this tech layer are just starting.
Just the other day I went to a website to flash a new firmware on a zigbee dongle. Straight from a chrome tab. wild!
Then it hit me: the only thing keeping a rogue website from sweeping your entire life is a browser's permissions popup.
> I have yet to see concrete evidence that disabling Windows update and windows defender would elevate risk of having the system compromised in any meaningful way.
It’s much less likely than it was 20 years ago. A lot of attack vectors have already been fixed. But hypothetically a bug in the network stack could still leave an internet connected machine vulnerable.
Do not connect it directly - use a dedicated router device.
You benefit from the fact that most machines are patched. If a lot more people used 2016 builds and didn’t patch you’d see a lot more exploits.
I use stock Win7 SP1 with just a couple updates (recently TLS and SHA-512, but only 27 hotfixes in total) and the only way to break something is if I deliberately run unverified executables that were manually downloaded from untrusted sources. And since I don't do this - my machine is still running the same installation that I did on December 24th 2014.
https://www.shodan.io/search/facet?query=windows&facet=vuln....
> browsing all kinds of sketchy sites with Firefox and chrome
How did you install those - downloaded via another system? Because with that old system, you are missing ssl certificates (Firefox and Chrome bring their own).
Maybe, but with good old Windows PKI you’re bound to still have a working chain of trust with Mozilla/Google.
…either that or the machine cheated and updated root CAs in the background (which isn’t Windows Update-controlled anymore).
How do you know your system weren't infected in that experiment?
By reinstating the ownership of those files.
Since the rest of the world updates their PC's, malware authors rarely focus on exploiting older versions.
Both Chrome and Windows are now in that position.
Basically, unless you are of interest to state level attackers, in 2025 even unpatched Chrome/Windows wont get drive by exploited.
Path traversal attacks against IIS (or any web server) are still routine yet those were fixed back in the Win 2K days.
Your thought process is not correct.
That seems like pretty sketchy reasoning.
Like leaving your door unlocked, because you live in such a sketchy neighbourhood that everyone else always locks their doors.
It would make sense if the cost/danger for the thieves to check every door would be prohibitive. Unfortunately, with networked computers, checking the doors is usually both riskless and effectively free.
And turning off your old door checker, just because someone fixed the vulnerability in the latest version, is probably more hassle than it's worth.
More like, continue living in a sketchy neighbourhood because all the thieves go to the newer, more polished neighbourhoods anyway.
There are still active attacks against DOS and Win98. Automated driveby attacks, just looking to increase the size of a bot farm. There are still new exploits being released against rather old systems.
Now I'm curious, how do you attack DOS? I mean, it comes without networking support, and if you have local access, you're already privileged.
You attack the networking stacks for it, those are still actively developed (mTCP was last updated Jan 2025) as businesses use networked DOS for quite a few things. A DOS networking stack consists of a packet driver, a NIC driver, and a protocol library. All of those have attack surface. NIC drivers in particular often haven't really had updates since they were first released. Because for hardware manufacturers of the time the goal was on getting people to use the hardware, not on supporting them. There are newer DOS NIC drivers than you'd think too. Realtek last I checked still makes and supports an ISA NIC.
So you are not talking about attacking old code at all, but networking stacks that are indeed actively developed? That feels like a very different ball game from attacking Win98, even if the platform they are running on top of is old.
It's a complicated space. There are attacks on both maintained and unmaintained stacks. There are definitely attacks against windows 95/98 too because people have things like mills or other industrial automation that are powered by those OSes still connected to the internet. There is a lot of SCADA[1] too that fits that bill. It's easy to think "but why wasn't this replaced!" and the answer is almost always "cost or process certification". If the operator is lucky and has good networking folks all of this is in a very very well firewalled VLAN. But, never underestimate the amount of people that are not that savvy and just have it plugged into the internet.
For anyone saying these aren't targets: no, they are probably already hacked. These are the things that keep the national security folks up at night, knowing an adversary has them already backdoored and set up for takedown. Moreover, if they execute on that, they would go for maximum damage first, to either create chaos or prevent the system from being repaired easily.
[1]https://en.wikipedia.org/wiki/SCADA#Security
Would suck if an exploit was present for years, sometimes decades. Would especially suck if people piled up old exploits and fell back on them as needed.
Imagine if this was all automated, even scripted, so even kiddies could do it, or others with almost zero security knowledge.
I'd really, really like to think most of us don't follow this terrible security practice based on a bad premise.
Everything was a zero-day at one point in time. The effort is indeed usually put in while it is the current version. But re-running all the old malware isn't effort; it is more or less the definition of a script kiddie (though state-level attackers will do it too).
Those of us who actually do this stuff for a living still routinely see probes for Slammer, Zotob, Blaster and more from when we booted our computers by rubbing two sticks together.
Actually, riddle me this: what if you want to exploit exactly the type of person who disables updates? They are potentially more lucrative targets if nobody else targets them. Just a thought. It's sort of how "delete me" services profit off paranoia; they're a lucrative market because of the paranoia.
It does have that. Windows uses code signing and either DISM or SFC to do that.
But this isn't about the binaries. It's where definitions and configuration are stored. It's C:\ProgramData, not C:\Program Files.
The system also can't object too severely. Third party endpoint protection exists.
This is about the binaries. I first tried renaming the folder in Program Files, but Defender still kept eating RAM and CPU resources which were scarce on a 12-year-old laptop.
My bad. You correctly understood my mistake here. I assumed it was clobbering a binary
> Third party endpoint protection exists.
much to everyone's dismay. :/
That is basically how a popular product does it, while taking down about 25% of the entire internet...
Are you talking about the recent CrowdStrike screwup?
I see what you did there.
FYI, WSC stands for Windows Security Center.
Thank you for the help. It is really frustrating when authors do not define an acronym when it is first introduced in the text.
But they do:
> The part of the system that manages all this mess is called Windows Security Center - WSC for short.
It needs to be closer to where the acronym is first introduced. The definition, on my screen, is below the fold, so it cannot be seen in the context of where the acronym first appears. If it were defined below the title, I would understand.
* https://apastyle.apa.org/style-grammar-guidelines/abbreviati...
* https://www.stylemanual.gov.au/grammar-punctuation-and-conve...
* https://learn.microsoft.com/en-us/style-guide/acronyms
I do a lot of copy editing for clarity and for non-native speakers, so I have to keep these things in mind. ¯\_(ツ)_/¯
This is somewhat useful feedback; however, I am not too sure how this can be fixed given the structure of my blog post. Do you think that if I just add a line `*WSC is short for Windows Security Center` to the first paragraph, this will be enough?
My suggestion:
In this post I will briefly describe the journey I went through while implementing defendnot, a tool that disables Windows Defender by using the Windows Security Center (WSC) service API directly.
thank you! i changed the first paragraph to include these changes
Ah that makes sense. I saw this subthread and was quite confused because WSC was clearly and obviously defined in the first sentence.
Now I see why. Thanks for incorporating the feedback! It had a positive impact for me coming later to this article.
Appreciated, thank you!~ \( ̄︶ ̄*\))
Or use the abbr element (and its title attribute) that was designed for exactly this purpose; no extraneous "flow"-breaking required. Mobile users can long-press the indicator to read more, and everyone who magically knew what WSC means gets to keep on knowing what WSC means.
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
The typical solution is to include the expansion in brackets after the first use.
Simple rule I learned during my Electronic Engineering degree (where we're guilty of many, many acronyms): when you write an acronym/initialism in a paper (or anywhere for others to read, really), assume the reader doesn't know what it stands for and include the expansion in brackets immediately after the first use.
EDIT: As my sibling comment also suggests, writing it in full the first time, and using the acronym/initialism in brackets is also acceptable.
Is this Slack? Just wondering what kind of logging flow you're using.
https://blog.es3n1n.eu/posts/how-i-ruined-my-vacation/pics/p...
Looks like Discord.
this is discord in "Compact" theme
At least that one is defined later on. I'm still scratching my head over "CTF".
[Edit - could be Capture The Flag?]
You're right, that never gets defined. Yes, Capture The Flag cybersecurity sort of competition I think
https://news.ycombinator.com/item?id=43960389
They do. They understandably shorten it in the title, but then they define the acronym the first time they use it in the article.
> the project blew up quite a bit and gained ~1.5k stars, after that the developers of the antivirus I was using filed a DMCA takedown request
I got really, really confused after that statement, because I don't understand what "the antivirus I was using" means and why they would have a reason to send the author a DMCA.
I think it means the author reverse-engineered another antivirus and put parts of it in their open-source project. But it could also mean other things. Skimming I see a heading with "Impersonating WinDefend".
So is the gist that the author somehow broke some kind of copyright law?
My understanding is he used the carcass of another AV tool to bypass signature requirements which is understandably grey (there's an argument for it being transformative, IMO but IANAL).
Yes, they broke copyright law by copying part of an existing AV program.
From the paragraph directly before the one you quoted:
The way how my project worked is that it was using a thirdparty code from some already existing antivirus and forced that av to register the antivirus in WSC.
This is cursed:
https://github.com/es3n1n/defendnot/blob/master/defendnot-lo...
If you're curious what's actually going on there:
https://github.com/es3n1n/defendnot/blob/master/cxx-shared/s...
Can someone well versed in explaining C++ magic explain what is going on and why it is cursed?
We're starting with this code: `defer->void { CoUninitialize(); };`

Using the macros in the second linked file, this expands to roughly: `auto _defer_instance_1234 = Defer{} % [&]() -> void { CoUninitialize(); };`

* The 1234 is whatever the line number is, which makes the variable name unique.
* `auto` means: infer the type of this local variable from the expression after the `=`.
* Defer{} means default construct a Defer instance. Defer is an empty type, but it allows the % following it to call a specific function because...
* Defer has an overloaded operator%. It's a template function, which takes a callable object (type is the template parameter Callable) and returns a DeferHolder<Callable> instance.
* [&]()->void { /*code here*/ }; is C++ syntax for a lambda function that captures any variables it uses by address (that's the [&] bit), takes no parameters (that's the () bit) and returns nothing (that's the ->void bit). The code goes in braces.
* DeferHolder calls the function it holds when it is destroyed.
It's subjective but some (including me!) would say it's cursed because it's using a macro to make something that almost looks like C++ syntax but isn't quite. I'm pretty confident with C++ but I had no idea what was going on at first (except, "surely this is using macros somehow ... right?"). [Edit: After some thought, I think the most confusing aspect is that defer->void looks like a method call through an object pointer rather than a trailing return type.]
I'd say it would be better to just be honest about its macroness, and also just do the extra typing of the [&] each time so the syntax of the lambda is all together. (You could then also simplify the implementation.) You end up with something like this:
Or if you go all in with a no-args lambda, you could shorten it further.

That's interesting! So I assume this macro allows code to be registered to run after the 'current' scope exits.
But from my understanding (or lack thereof), the `auto _defer_instance_1234 =` is never referenced post construction. Why doesn't the compiler immediately detect that this object is unused and thus optimize away the object as soon as possible? Is it always guaranteed that the destructor gets called only after the current scope exits?
> Why doesn't the compiler immediately detect that this object is unused and thus optimize away the object as soon as possible? Is it always guaranteed that the destructor gets called only after the current scope exits?
Yes, exactly. The destructor is allowed to have a visible side effect, such as closing a file handle or unlocking a mutex, that the code in that block could depend on. (Even just freeing some memory could be an issue for code in the block.) It is guaranteed that the destructor is called at the end of the block, and that all the destructors called in that way run in reverse order of their corresponding constructors.
Yes, this is guaranteed. The compiler cannot simply elide statements with effects.
I don't think we actually need `->void` -- shouldn't the compiler be able to infer the return type (or rather, absence thereof)? My experience is that the compiler only struggles when the return value needs to be implicitly converted to some other type.
Would it have looked any less cursed if it just read `defer { CoUninitialize(); };`?
Agreed that the simplest "fix" would be to just rename the macro to be all-caps.
> I don't think we actually need `->void`
Yes, agreed.
> Would it have looked any less cursed if it just read `defer { CoUninitialize(); };`?
It's subjective but personally I still hate it.
> Agreed that the simplest "fix" would be to just rename the macro to be all-caps.
Actually I think the bigger part of my suggestion is switching from an object-like macro to a function-like macro [1], which makes it all a bit less magical.
[1] https://stackoverflow.com/questions/36126687/function-like-m...
And, I personally hate macros that pretend to be functions but provide no visual indicator that they're not actually functions. For instance, `#define min(x, y) (x < y ? x : y)` evaluates its args multiple times. It's a little less bad when it only takes a single argument, but I am still irritated by things like htonl.
I think the "best" approach here would be to make it a function-like macro, and also change the name to all caps.
(Also, I tend to agree that `defer { ... };` is still cursed -- it requires the trailing semicolon, which further breaks the illusion of a keyword that takes a block scope.)
> * Defer has an overloaded operator%. It's a template function, which takes a callable object (type is the template parameter Callable) and returns a DeferHolder<Callable> instance.
Is there any reason to use operator% instead of a normal method call? Except possibly looking cool, which doesn't seem useful given that the call is hidden away in a macro anyway.
If you used a normal method call then there would need to be a corresponding close bracket at the end of the overall line of code, after the end of the lambda function. But the macro ("defer") only occurs at the start of the line, so it has no way to supply that close bracket. So the caller of the macro would have to supply it themselves. As I mentioned near the end of my comment, it seems like the defer macro is specifically engineered to avoid the caller needing a close bracket.
If you don't mind that, I said that you can "simplify the implementation" - what I meant was, as you say, you don't need the overloaded Defer::operator% (or indeed the Defer class at all). Instead you could do:
Disclaimer: I haven't tried it and I don't normally write macros, so this could have glaring issues.

A way to do the same thing that is less gross: https://github.com/abseil/abseil-cpp/blob/master/absl/cleanup/cleanup.h
Eh, there are better implementations that are less syntactically obtuse (no ->void), but other than that it's fine. Fairly obvious what it's supposed to do, and I've needed similar things in the past. There's a CppCon talk that uses the ->* operator for precedence reasons, and the macro lets you use it like 'defer { … };'.
C++ sort-of guarantees that your objects' destructors will be called when they go out of scope.
So you can abuse this mechanic to 'register' things to be executed at the end of the current scope, almost no matter how you exit the current scope.
This is a class which implements a 'defer' mechanism, similar to the Go and JavaScript constructs which do the same thing: delay execution of the given block until the current block scope is exited. It's pretty clever, actually, and quite useful.
I personally don't find it that cursed, but for many old C++ heads this may be an overwhelming smell - adding a class to implement what should be a language feature may tweak some folks' ideology a bit too far.
yeah sorry, i didn't feel like implementing my own RAII stuff for all the COM thingies due to time constraints. it will be changed in the next update though
https://github.com/es3n1n/defendnot/pull/6
https://en.cppreference.com/w/cpp/experimental/scope_exit
Honestly if this isn't part of a public API this isn't very cursed in terms of C++, especially if you have a lot of one-off cleanup operations.
I think the only bit I don't like personally is the syntax. I normally implement defer as a macro to keep things clean. If done correctly it can look like a keyword: `defer []{ something(); };`.
I think the syntax is exactly why they're saying it's cursed. IMO your suggestion is no better - yes it makes defer look like a keyword, but it's not! As I said in a sibling comment, I think it's clearer if you're honest that you're using a macro: DEFER([](){something();});
Or you could even make a non-macro version (but then you need to think of variable names for each defer):
Sure, I've used __LINE__ for this before too, and yeah I agree that my keyword construction was too clever (seemed cool at the time, since the macro had a dangling = at the end to make it work).
Why did you write it with two structs though? You could do
and call it as …, which looks much nicer. The preprocessor treats balanced curlies as one single token regardless of how many lines it spans, precisely to enable this usage.

What's cursed about this? I use this pattern all over in my code, although the signature at the callsite looks a bit different (personal preference).
D (for example) has the concept of statements that trigger at end of scope built into the language.
Code is a way you treat your coworkers - Michael Feathers, https://x.com/mfeathers/status/1031176879577780224
TL;DR, not AI
The code defers a function call until the point in time that an object goes out of scope. The implementation uses C macros to create a more succinct syntax that omits parts of the necessary C++ lambda/unnamed function definition and to create a unique variable name for managing the deferred function call. However, the resulting syntax eschews the common convention of using UPPER CASE to denote C macros, and instead appears at first glance similar to a function call through an object pointer.
This can cause confusion if one is not familiar with this pattern and expects macros to be communicated differently. Some commenters say this is common enough, or useful enough to them, to be considered almost idiomatic in some contexts.
For technical explanation, https://news.ycombinator.com/item?id=43959403#43960905 provides a useful breakdown of how the macro works.
lol, i significantly improved my vacation by reverse engineering the virtual desktops on windows :) best memories of last year: reverse engineering is hellovafun!
learned a lot of interesting things, namely that there is an undocumented messaging layer underlying RPC in Windows: https://csandker.io/2022/05/24/Offensive-Windows-IPC-3-ALPC....
Why would you want to disable WSC?
Performance reasons? Malware development? Hacking?
Is there a more performant, less resource-crippling, antivirus for Windows?
Yes, a small number, but it changes each year due to AV vendors (including Microsoft) changing how their AV works. It also depends on whether one looks at the impact from passively running the antivirus vs actively running a scan.
A skilled user.
I understand and mostly support the idea of mandatory AV for the people who can barely handle the concept of a file system.
There is also a class of user forged in the fires of the primordial internet who would never in a trillion years be tricked into clicking a fake explorer.exe window in their browser.
Giving users choice is the best option. Certainly, make it very hard to disable the AV. But, don't make me go dig through DMCA'd repos and dark corners of the internet (!) to find a way to properly disable this bullshit.
Skilled in what, exactly? In x-raying all the data storage on a system with the naked eye and spotting malware there? In sniffing the ether around the system and smelling malicious bits coming in on the radio spectrum? How does this skill works?
> How does this skill works?
I've been using computers for 40 years, have never installed and have always disabled malware scanners, and never had a virus. Maybe I'm special. But I'm not that special. There are 3 billion Android users in the world, almost all of them don't have malware scanners, and almost all of them have never been infected by malware. Ditto iPhone users.
To be fair, I haven't used Windows for the latter 1/2 of that 40 years. So maybe it's only Windows users who need to go around x-raying all data storages.
I've used computers a bit less, since the '90s, and I'm also careful not to do dumb stuff on them. But I can't guarantee that any of my PCs at any time is virus-free, because I don't know it and can't know it. And that includes Linux, btw, though statistically it is much safer. But Linux is beside the point; the whole topic is about removing a Windows component, and on Windows there are millions of different malware strains.
Most Android users have the malware scanner in Google Play Services enabled.
> There is also a class of user forged in the fires of the primordial internet who would never in a trillion years be tricked into clicking a fake explorer.exe window in their browser.
Until they've had a couple drinks. Might still need a more sophisticated fake than that, but they exist. I'm with you on the disabling part though: I think Apple gets it right with SIP, it takes a reboot in recovery mode to disable it temporarily and a single command while in recovery mode to make it permanent.
The worst is when they silently re-enable the AV with a mandatory update later.
It's called no antivirus. It's what this is supposed to do. Antiviruses are useless malware.
Ah yes, I have my Windows power user bingo card dusted off! So far in this thread I’ve got:
- Antivirus software is malware
- We have to disable Windows Updates because I didn’t like them 30 years ago
- Windows Defender hogs resources, laptop reviews showing Windows systems getting 10 hours of web browsing battery life are lying, Windows Defender actually ruins the performance of your computer
- It’s better to complain constantly about Windows and spend hours disabling functionality rather than switch to Linux
I’m just waiting for “Windows sucks I’m thinking about switching to Linux but never end up doing it” and I’ll have a bingo!
>Windows Defender hogs resources, laptop reviews showing Windows systems getting 10 hours of web browsing battery life are lying, Windows Defender actually ruins the performance of your computer
There are definitely times when I wish I could disable it outright. Often someone will want my help reviving an old computer or laptop, and it'll have to sit for a day in a loop of Windows Update fighting Windows Defender for resources, with neither of them making much headway, before one or the other finishes enough to let the other run for a bit.
We use some software that stores each record in a separate file; basically using the filesystem as a database.
Without adding an exception to Windows defender, that software is unusably slow. Once the exception is added (or defender is turned off) the software is nice and fast again.
The solution there is adding the exception, not turning off Defender, especially when you don't have control over what other activities may take place on the system.
Exceptions are valid when scoped to a container where you reasonably expect to be the sole user of the data therein and it contains no executable code.
I honestly have never seen Defender behave with exceptions properly. Sometimes it does, sometimes it doesn't. Seems to depend on whether the day starts with a T.
It sounds like adding an exception is the intended way to do exactly what you’re doing and resolves the issue entirely.
It also sounds like you wrote bad software that didn’t consider the architecture of the parent OS.
While your first statement is reasonable, your second is uncharitable and hostile.
If Windows won't allow use of the filesystem as a database or cannot heuristically detect when a folder is being used as a store of data, Windows is wrong, not the developer.
Amusingly Microsoft ships exclusions for their own software, and states "Opting out of automatic exclusions might adversely impact performance, or result in data corruption. Automatic server role exclusions are optimized for Windows Server 2016, Windows Server 2019, Windows Server 2022, and Windows Server 2025."
https://learn.microsoft.com/en-us/defender-endpoint/configur...
> If Windows won't allow use of the filesystem as a database or cannot heuristically detect when a folder is being used as a store of data, Windows is wrong, not the developer.
I guess Nintendo is wrong for not giving you a file system at all on the Game Boy. The analogy may be extreme but that’s part of the point here: who are we to dictate Microsoft’s design goals and choice of compromises?
It’s really not Microsoft’s fault if their product doesn’t meet the specific needs of someone’s specific software use case.
I do agree that my first suggestion is the more sensible one, but my second one was more of a philosophical point. Windows has been the same old Windows for a long time and developers that don’t understand its limitations and requirements for deploying applications are more in the wrong than Microsoft in this scenario.
If Microsoft felt like the best design decision was to remove windows defender and that there was no negative impact to doing so they would have done it by now.
I'm sorry, but I believe your argument is extremely weak.
The Nintendo/Game Boy analogy doesn’t hold water. Nintendo doesn’t give you a filesystem on the Game Boy, but it certainly doesn’t stop you from implementing one yourself. Nintendo doesn’t include a filesystem because that’s not part of the Game Boy’s platform model; it’s a console with fundamentally different goals and constraints. If you require a virtual filesystem to load assets for your game, Nintendo _will not_ slow your cartridge ROM down.
Windows, on the other hand, has always shipped with a general-purpose filesystem and encourages developers to use it for data persistence, caching, configuration, and more. In fact, the Win32 API is deeply file-centric. Even the OS has its own hidden virtual filesystem.
Windows is a Unix-inspired CP/M derivative, and both lineages are strongly file-based. In fact, when Windows tried to replace the filesystem with a database in Longhorn, they failed spectacularly, and only a few pieces of that design are left today. What still exists, however, is a filesystem optimized for storing files.
Suggesting that developers are "in the wrong" for relying on the filesystem on an OS that has always promoted it is like blaming drivers for expecting roads to be usable. We’ve been building software on Windows that reads and writes files for decades, with Microsoft’s full blessing.
If Defender or related tooling starts punishing valid, decades-old patterns like using a folder as a key-value store, that’s not a failure of developers to "understand Windows". It’s a regression in the OS, or at least a poor balance of heuristics.
We absolutely should question Microsoft’s design goals if they break longstanding, legitimate use cases without offering workable alternatives. Being dominant doesn’t make them immune to critique, especially when their changes have real-world consequences for maintainable, cross-platform software using well-established techniques.
Why blame me? I didn't write the software, I only use it. But yes, I consider it badly written software due to that design. I would use SQLite for that particular use case. It would make the programming easier and more performant.
That “architecture of the parent OS” is so shitty they had to introduce a first party “Dev Drive” mode to disable said architecture wholesale so that developer workflows aren’t crippled. Think about that.
I assume you either don’t really know what you’re talking about, or are arguing in bad faith.
Oh, and people develop software for a living and sometimes that involves making sure the software works on Windows. Not everyone complaining is using Windows by choice.
But again if the architecture of the OS is shitty/doesn’t meet your needs at some point you gotta stop using it and stop complaining.
This whole topic is a massive eye roll.
In what universe is windows defender “resource-crippling?” There are windows laptops that will sip battery for an entire workday plus extra hours while running defender the entire time. So clearly it’s not “resource-crippling” if it can run on a laptop with a single digit wattage power draw.
And then we’ve got the “I need to control my system I’m too smart for antivirus” folks all over this thread.
Well, if you’re so smart why are you using a consumer OS designed for idiots?
(I like OP’s tongue-in-cheek work and post a whole lot better than the neckbeard army describing how Windows is broken and totally doesn’t work and how we have to disable updates and antivirus because we are power users I guess so we just do that for no reason)
> In what universe is windows defender “resource-crippling?”
This one? Not all of us want to throw perfectly usable hardware in the e-waste pile. Windows 10 was perfectly fine on my old Haswell miniPC, save for Defender wasting CPU cycles and IO doing..."checks".
Let’s cut the bullshit, Defender is basically unchanged as a concept since Windows Vista or maybe even Windows XP. It runs completely fine on 15 year old hardware.
We are in the “Windows users complain endlessly and refuse to switch to Linux” bingo card right now. Windows has been this way since before you bought that mini PC.
> Let’s cut the bullshit, Defender is basically unchanged as a concept since Windows Vista or maybe even Windows XP. It runs completely fine on 15 year old hardware.
Exactly. It's the same legacy "scan every fucking thing you open" AV architecture.
Back in the day of spinning disks it probably wouldn't have been too noticeable for the AV to marshal scanning to its usermode service and have the filesystem pull the data from cache for the original request afterwards. However, now that we have 10 GB/s+ capable SSDs, the relative slowdown is far larger.
I can run ripgrep on a massive directory, make myself a cup of tea and return to it still searching for matches versus being done in < 10 seconds with defender disabled.
Yeah so like, every time I ran AV software it was quite obvious where the paranoia settings were, and how to tone down the aggressive "scan everything everywhere every time" settings.
For 98% of systems, there is probably no reason to scan every file on opening it. If people have enabled that setting, or left that default on, then that's their problem; it's not Windows Defender's fault.
My current AV dashboards are screaming at me that I'm only 35% protected. That's because I've exercised a lot of prudence in enabling paranoid settings, based on my rather limited and simplistic threat modeling. Installing AV software comes with the understanding that it can steal resources, but they nearly always have plenty of settings that can be disabled and win back your system responsiveness.
I am beginning to believe that commenters giving bingo-card winnings are not the brightest bulbs in the Windows MCSE pool, honestly. I can relate: Linux and Unix admin in general is far more intuitive and comfortable for me, so I have generally stayed on that side of things, but knowing how to properly set up Windows is an indispensable life skill for anyone.
> If people have enabled that setting, or left that default on, then that's their problem; it's not Windows Defender's fault.
There is no such setting for Defender. The file scanning is either on or defender is completely off. To even access some of the better stuff like ASR rules (that are disabled by default) you need third-party software or pay for their enterprise offering.
Consumer Defender literally has like 4 toggles in total. It's a dumbed down and extremely permissive AV because it runs on every Windows machine.
>In what universe is windows defender “resource-crippling?”
In any universe where you do a lot of small file IO. I'm not saying that other AV isn't far worse, but on access/write/delete AV massively kills performance when you do anything that creates/deletes tons of small files.
If you are a threat actor, you could get lucky and there isn't another Endpoint Detection and Response product installed, which would almost certainly intercept this.
If you are an EDR vendor, this is an obfuscated API call that EDR vendors can use to suppress or disable the Windows Firewall. CrowdStrike for example, can do either I believe, use Windows Firewall or use their implementation.
because all antivirus softwares are at least powerviruses.
i do not care for anyone babysitting me, telling me that netcat.exe is a no-no
It’s my hardware. I’ll do what I want with it, m8.
Simple as that.
Well this is a straightforward sentiment with a real "my body, my choice" ring to it, isn't it? Until it isn't.
Perhaps your hardware, when connected to a network, has real effects on the rest of that network. What if your system joined a botnet and began DDOS activities for payment? What if your system was part of a residential proxy network, and could be rented in the grey market for any kind of use or abuse of others' systems? What if your system became a host for CSAM or copyright-violating materials, unbeknownst to you, until the authorities confiscated it?
And what if your hardware had a special privileged location on a corporate network, or you operated a VPC with some valuable assets, and that was compromised and commandeered by a state-level threat actor? Is it still "your hardware, your choice"? Or do your bad choices affect other people as well?
Man that is a silly line of thought. Your conclusion now has to be that all freedom is bad because peoples choices can have ramifications, yeah?
Oh, you chose to buy new shoes even though they were too tight which distracted you for 1 sec in your car on the way home, due to the discomfort, so you hit someone and they died.
Clearly people can not be trusted to buy their own shoes!
I got measles just reading this
There's the "Malicious Software Removal Tool" for that case.
I presume you use Apple products, right?
I guess I have to start audit all devices that connect to my home internet...oh wait
Geez what a cluster* of a comment. You mix in a bunch of theoreticals you came up with in 5 seconds that cover different domains and then don't actually go to the effort of critically examining your own statements, which is appreciated and makes for much higher quality comments.
>Perhaps your hardware, when connected to a network, has real effects on the rest of that network. What if your system joined a botnet and began DDOS activities for payment? What if your system was part of a residential proxy network, and could be rented in the grey market for any kind of use or abuse of others' systems?
This at least is "you, affecting others". But the obvious immediate response is that such things done via the network can be mitigated or blocked at the network layer, and indeed must be anyway since attackers are doing such things from across the world 24/7 regardless. I'd fully support ISPs having to throttle or even potentially block-until-fixed any customers who participate in active network attacks, and other parts of the internet throttling or black listing ISPs that refused to cooperate. But making someone deal with the consequences of their choices is no reason to deny them the choices in the first place, given that most of those making such choices are not, in fact, actually going to end up doing any of what you listed.
>What if your system became a host for CSAM or copyright-violating materials, unbeknownst to you, until the authorities confiscated it?
Here (and seriously ZOMG THINK OF THE CHILDREN, lol really? on HN, in 2025?) you veer off into personal consequences to the person making the choice, as opposed to them being part of an attack on others. This is just saying "there could be risks to you if you mess it up!" which is a complete non-statement.
>And what if your hardware had a special privileged location on a corporate network, or you operated a VPC with some valuable assets, and that was compromised and commandeered by a state-level threat actor? Is it still "your hardware, your choice"? Or do your bad choices affect other people as well?
Um. Hello? Why is corporate IT allowing you to BYOD to a special privileged location on the corporate network without even so much as any sort of management agreement or contractual responsibilities? At this point you've veered off the road of reality. Because in actual reality you don't own hardware in special privileged locations or at least don't have full choice over it by your own agreement. And if that's not the case hooboy is there a kind of a lot of other fundamental issues there. That's not an argument for a blanket universal policy.
[flagged]
cost-benefit. the time/electricity/battery/frustration cost of windows defender dwarfs its utility. i’d be better off with some east euro hackerman’s crypto miner running in the background than WSC. at least hackerman knows how to not peg my CPU at 90% while he’s mining his moneros.
Because why would you want to rootkit yourself on purpose?
I recently read https://nostarch.com/windows-security-internals and it makes all of this much more relatable. I've known a bit about how a lot of this background stuff works in Windows, but the timing is great: the last chapter of that book goes into the same detail about tokens and SIDs that this author went into.
For those wondering:
WSC stands for Windows Security Center.
I had to look it up as well
> The part of the system that manages all this mess is called Windows Security Center - WSC for short.
It’s in the article
true, but you have to read until the 4th paragraph to find it. Putting it in the title would have been better
Fair point
What's worse, for me, is that Check Point Harmony does not utilize the Defender interfaces crafted for this purpose, but instead publishes a knowledge-base article telling users to disable Defender themselves.
> As you might still remember, I was working on an arm64 macbook and there currently is no sane solutions how to emulate x86 windows on arm macbooks.
What about UTM? Also Parallels recently added initial support for Intel VMs as well.
The dynarec systems in QEMU aren't as efficient as the native dynarec systems in Windows and macOS (Rosetta 2). You can definitely run x86 Windows with UTM, and it works, but the performance characteristics are pretty poor. From a utility perspective, I've found that running an ARM Windows VM and using Windows' dynarec system to run x86 apps, or using WINE (both using native compiled subsystem code) is a much better experience. It's one of those things where it's okay if you need to run a workload in a pinch.
I'm not sure if performance characteristics are part of what the OP considers "sane", but if it is, I get the position.
I tried UTM and it's unusable for x86 Windows.
Maybe command line Linux would be acceptably slow, but anything with a GUI isn't.
You can run arm64 Windows pretty well, but that's not x86 Windows and won't help with reverse engineering an x86 system component.
I hadn’t tried it myself I just knew it could run it, sucks to hear it’s so unusable.
It depends on what you need though, because arm windows has its own rosetta-like translation and does run x86 applications.
I set up a windows arm inside an UTM VM as a test, then installed visual studio (not code!) which is an x86 application and it was pretty much usable.
The codebase i was working on was complaining about missing some OpenGL parts so I stopped and haven't investigated further (I have x86 boxes for working on it). But depending on your requirements the above setup may be just fine(tm).
Correct me if I'm wrong, but isn't the emulation of an MMU-equipped CPU a fundamentally slow and unoptimizable task? Apple's Rosetta and its Microsoft equivalent only work as fast as they do because they only run userspace code so they don't have to emulate the MMU.
I wouldn't go as far as to say unoptimizable, but it's certainly harder, particularly if your emulator is running in userspace (like Rosetta does).
What does CTF stand for?
A security competition of sorts https://en.wikipedia.org/wiki/Capture_the_flag_%28cybersecur...
Cool, but why would you need to travel to do that? They mentioned they often travel with 2 laptops for CTF.
> My current main machine for non-ctf things is an M4Pro MacBook, and usually, when I am going for a CTF I bring an another x86 laptop with me to do some extensive reverse engineering/pwn stuff as it is usually built for the x86 cpus. Emulation would kind of work for this task but it is pretty painful so I just use an another laptop for all the x86 stuff.
Seems like they don't want to bother with emulation when many of the challenges are not compatible with their main computer.
https://en.wikipedia.org/wiki/Capture_the_flag_(cybersecurit... I believe
Every time I see anime characters in pfp, I know it’s going to be a good write up. Thanks for sharing.
Keeping this saved in case I return to a crappy windows env.
no idea there was so much going on behind the scenes of defendnot (I feel like someone sent it to me earlier; thought it was super cool)
Lmao reverse engineering WSC on vacation sounds like some real dedication - honestly can't tell if that's commitment or just a cry for help. Made me think: if tuning all this stuff gives you a headache, would you rather have max security or just peace of mind and a fast machine?
> Max security or just peace of mind and a fast machine
Or, to avoid making that choice at all, just don't use Windows.
There's plenty of other insecure systems.
Windows in its entirety is security theatre. WSC is an example of this
This is a godsend. I should send you a jar of kimchi for this. Please return to Seoul, and enjoy the sights. South Korea is one of the most beautiful countries in the world. Try to plan it to coincide with either the cherry blossoms falling in the spring or the leaves falling in the fall.
I miss Seoul.
Will you go back? Holidays, or are you from there?
"Busan is Good"
<3
Is the point to actually disable defender or to highlight a vulnerability?
I think the point is to disable defender: Air-gapped machines, kiosks, industrial applications, and so on, have no need to eat gobs of ram and waste loads of cpu checking the same files over and over again. For other applications, WD provides dubious benefits. It is annoying that there isn't a switch that says "I know how to operate a computer".
Evildoers don't need to bother with this: If they have access at this point you've got other problems.
Microsoft may extend WD to detect/block this vector since it is using undocumented interfaces; Microsoft would absolutely prefer you buy more cores, and if you're not going to do that, collect some additional licensing revenue through some other way.
> It is annoying that there isn't a switch that says "I know how to operate a computer".
I found one such switch: Install Linux
Why would Microsoft care how much money I spend with my CPU core vendor?
Because Microsoft charges per core:
https://www.microsoft.com/en-us/windows-server/pricing
Oh, that's for the server.
Somehow when I hear 'Windows' I only think of desktop use, not servers.
That is one possible point, but on machines with low memory (like a lab full of 8 GB potatoes) this is a godsend. These lab PCs are so stripped down that the only thing using most of the memory is WD.
You should be able to have a normal mode that runs full security and a gaming mode that just runs a semi-large game. And yes, this does expose a vulnerability, but it can easily be brought back up.
Oof, really? Haven't really used windows much after 7, but it always seemed to me defender was pretty lightweight. At least compared to all the other products where just opening the UI would lag out the average machine.
[flagged]
[flagged]
[flagged]
This is literally Hacker News :)