Things we can do better than Windows


PurpleGurl
Posts: 1788
Joined: Fri Aug 07, 2009 5:11 am
Location: USA

Things we can do better than Windows

Post by PurpleGurl »

I am sure there are many things we can do better than Windows does them, and without breaking compatibility. Here are some that come to mind; others may have more.

1. Memory access. Theoretically, it is possible to address 64 GB in both 32-bit and 64-bit mode, since the processors use 36-bit physical addressing. However, MS decided to limit it to 32-bit addressing and has done things that make memory above that incompatible.
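For concreteness, the arithmetic behind the 64 GB figure - 36 physical address bits on PAE-era CPUs versus the classic 32 - can be sketched in C (a back-of-the-envelope helper, not any actual kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Bytes addressable with a given number of physical address bits:
   32 bits -> 4 GB (the classic ceiling), 36 bits -> 64 GB. */
static uint64_t addressable_bytes(unsigned bits) {
    return (uint64_t)1 << bits;
}
```

So widening the physical address from 32 to 36 bits multiplies the reachable RAM by 16, from 4 GB to 64 GB, without changing the 32-bit virtual address size each process sees.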

2. CPU support. In due time, when we get SMP and other multiple-processor support, I see no need to impose artificial limits on the number of CPUs or cores, just to satisfy some arbitrary licensing provision. MS uses this scheme to sell more expensive editions of a version. XP Home only allows one physical processor, though it allows multiple cores on that processor - regardless of whether they are true cores or Intel-style hyper-threading cores.

3. System Restore. MS's System Restore has a number of quirks. For instance, it defaults to turning itself on for all drives, including new drives as they are discovered. For my setup, that makes little sense. There is no risk of crashing the computer if I lose any partition other than C. If something deletes all my music, videos, or uninstalled programs, that won't trash Windows, render my system unbootable, or cause any system malfunctions. So I have to manually disable it for all the other drives and then delete the System Volume Information folders on them. And if I plug in an external drive, it automatically monitors it too (the first time it is used, or again if I delete the hidden device entries for its partitions). We could even come up with a better way of implementing it, as long as all the APIs are there and behave the same. System Restore is a very intimate process: nothing outside Windows talks to it directly, and programs that call it are agnostic about how it works. We could put all the super-hidden SR files in a tarball if we wanted, as long as the service interacts with software the same way.

4. Less registry clutter. Windows automatically adds registry entries for things related to Windows 3.1, as well as a number of invalid entries. I know Mark R. of Microsoft hates registry cleaners, but I have ones I trust that I run, and the first time I install almost any copy of Windows, there are always a bunch of useless or invalid registry entries. Another way Windows clutters the registry is the control set backup mechanism. It may keep up to 99 copies of that one tree in its hive, and there is no way to prune them in Windows. I know it is an important feature (Last Known Good Configuration) and provides the ability to roll back the environment in case it doesn't boot. In addition, more registry junk includes things like keyboard and time settings for countries and zones that are not your own.

5. More options to disable logging. Sure, we should generate logs and events. However, why not have options to disable all that for those who have their systems tuned and have no trouble? That could help with disk access and fragmentation, and make things more responsive (like in games).

6. No FAT32 folder defragmentation API bug. NT-based defragmenters don't do a good job compacting files. Part of it is that they are written for a multitasking or server environment and are designed to run while software is using the drive. Windows 98 defragged much better with its own utility or with certain third-party utilities like Norton. However, to protect integrity, the Windows 95/98/ME defragger and others refused to compete for disk access. When I installed Win 2K, I noticed that the defragger was not thorough: it refused to defrag FAT32 folders, and Norton would not only refuse to defrag the folders, it would even refuse to sort them. The reason for not touching the folders is a bug in FAT32 folder defragmenting in NT-compatible OSes. The authors all knew about this bug and refused to sort or defrag FAT32 folders, though they will do so on NTFS volumes.

If we produce a version without the FAT32 folder-related API bugs, I imagine defrag authors would be willing to check for ROS and defragment the folders while running on ROS. So if we made it work properly and not corrupt the drives, it would not pose a compatibility problem, since only fools would use the broken FAT32 directory defragmenting calls, and I've found no NT-compatible defraggers written by fools.
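For context, the NT defragmentation interface works by asking the file system to relocate runs of clusters (in real Windows this is done through DeviceIoControl with FSCTL_GET_RETRIEVAL_POINTERS and FSCTL_MOVE_FILE). A toy model of what "defragmenting a folder" amounts to - relocating its clusters into one contiguous run - might look like this (all names hypothetical, purely illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: a file or folder is a list of cluster numbers; "defragmenting"
   it means relocating those clusters to one contiguous run starting at dest.
   The real NT interface is FSCTL_GET_RETRIEVAL_POINTERS / FSCTL_MOVE_FILE
   driven through DeviceIoControl; these helpers are hypothetical. */
static void compact_clusters(unsigned *clusters, size_t n, unsigned dest) {
    for (size_t i = 0; i < n; i++)
        clusters[i] = dest + (unsigned)i;   /* move each cluster in order */
}

/* A run is defragmented when every cluster follows its predecessor. */
static int is_contiguous(const unsigned *clusters, size_t n) {
    for (size_t i = 1; i < n; i++)
        if (clusters[i] != clusters[i - 1] + 1)
            return 0;
    return 1;
}
```

The bug being discussed is that the NT move call misbehaves when the clusters belong to a FAT32 directory rather than a file, which is why defragmenters simply skip directories on FAT32.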

7. Ability to move window tabs. I still prefer windows over tabs for most tasks. Still, it would be nice to be able to drag the window buttons on the taskbar and rearrange their order. For those who don't like it (say, if they keep reordering by mistake), we could provide a setting to disable this Explorer feature.
Last edited by PurpleGurl on Wed May 18, 2011 6:33 pm, edited 4 times in total.

Techno Mage
Posts: 89
Joined: Mon Nov 28, 2005 2:05 pm

Re: Things we can do better than Windows

Post by Techno Mage »

Since ReactOS is going for full compatibility, number 1 would break many applications, especially ones that talk to each other.
Virtualizing the address space is what 64-bit versions of Windows do, but again, 32-bit programs are limited for compatibility.
Blame it on Windows 3.1.

Z98
Release Engineer
Posts: 3379
Joined: Tue May 02, 2006 8:16 pm
Contact:

Re: Things we can do better than Windows

Post by Z98 »

1. The limiting that MS did was because of drivers. Enabling PAE, which is the mechanism for allowing access to more than 4 GB of RAM, caused third-party drivers written for 32-bit Windows to start breaking. I personally have no objection to what MS did here, as it was not until the Vista era that driver writers started getting their act together.

2. Once ROS actually gets multi-cpu support, the limitation will be based on the biggest system we are able to test ROS on. If the biggest machine the project can get its hands on is a 4 processor machine, I'm all for stating we only support up to 4 processors.

4. I'd suggest looking at the ReactOS registry to see if we have that stuff in the first place.

6. We are unlikely to fix any "bug" that produces different behavior on ReactOS than it does on Windows. We don't want third-party developers to have to create a separate version for ReactOS or base their code on whether it is running on ReactOS vs Windows. And as this is a FAT32 issue, its priority would be very low on the list for developer time.

sh4ring4n
Posts: 120
Joined: Thu Oct 30, 2008 2:05 am
Location: Canada
Contact:

Re: Things we can do better than Windows

Post by sh4ring4n »

Z98 wrote:1. The limiting that MS did was because of drivers. Enabling PAE, which is the mechanism for allowing access to more than 4GB of RAM, resulted in drivers written for 32bit Windows by third parties to start breaking. I personally have no objection to what MS did here, as it was not until the Vista era that driver writers started getting their act together.
Hope this explains why this bug exists XD:

http://www.reactos.org/bugzilla/show_bug.cgi?id=6031
The cake is a lie!

PurpleGurl
Posts: 1788
Joined: Fri Aug 07, 2009 5:11 am
Location: USA

Re: Things we can do better than Windows

Post by PurpleGurl »

Techno Mage wrote:Since ReactOS is going for full compatibility number 1 would break so many applications, especially ones that talk to each other.
Virtualizing the address space is what 64 bit versions of windows do, but again 32bit programs are limited due to compatibility.
Blame it on Windows 3.1
True. However, for 32-bit, we could provide a switch (registry value) and trust users to know when and if to change it. Savvy users of Windows XP SP2/SP3 and Vista hacked out these safeguards. Most drivers are compatible with being mapped that high. A lot of the incompatibilities started when Microsoft rewrote the HAL. It is so interesting that something breaks in Windows whenever they have serious plans for a new version. So take away features, blame compatibility, then charge more to give you what you had before they took it away. What the 64-bit drivers did was give them the opportunity to break from old standards, which made the incompatibility possible.

I believe Microsoft should have stayed out of it and let the forces of "capitalism" fix this problem. I mean, back when ATI had really bad drivers, customers took their products back and bought competitors' products. I did that with maybe two of their video cards; the drivers would not install. Yet I had no problems with NVidia cards or their drivers at that time. When enough people returned products and/or complained, ATI listened, fired their entire driver team, and hired people with more experience writing drivers and installers. Then they once again gave NVidia a run for their money.

Black_Fox
Posts: 1584
Joined: Fri Feb 15, 2008 9:44 pm
Location: Czechia

Re: Things we can do better than Windows

Post by Black_Fox »

PurpleGurl wrote:ATI listened and fired their entire driver team and hired those with more experience
may as well be "AMD bought ATI and told the driver team to get their act together" </OT>

PurpleGurl
Posts: 1788
Joined: Fri Aug 07, 2009 5:11 am
Location: USA

Re: Things we can do better than Windows

Post by PurpleGurl »

Z98 wrote:1. The limiting that MS did was because of drivers. Enabling PAE, which is the mechanism for allowing access to more than 4GB of RAM, resulted in drivers written for 32bit Windows by third parties to start breaking. I personally have no objection to what MS did here, as it was not until the Vista era that driver writers started getting their act together.
Actually, a lot of drivers behaved fine under PAE even as early as XP RTM. Then Microsoft rewrote the HAL in a way that was incompatible with a lot of drivers, and then imposed a limit to cover up their goof and to "protect" everyone from a few bad drivers that most folks would never encounter anyway. So why not a registry key, if we include the PAE code at all?

Besides, isn't there a new way besides PAE to use the memory between 4 and 64GB?
Z98 wrote: 2. Once ROS actually gets multi-cpu support, the limitation will be based on the biggest system we are able to test ROS on. If the biggest machine the project can get its hands on is a 4 processor machine, I'm all for stating we only support up to 4 processors.
What is the difference between running it on a 2-core, 3-core, or 4-core? I mean, why would it be unreasonable to scale or interpolate how this works? If the code is designed to be scalable, then why should we expect a 6-core to behave differently from a 4-core? So instead of a hard limit, why not a soft limit like a registry key? Then testers and the brave can override it. My next machine will be a hex-core, and AMD has an 8-core in the works. However, from what I have heard, it might not be 8 true cores, but something closer to Intel's hyper-threading - perhaps 4 dual-threaded cores reporting as 8. They haven't released much of the specs yet.
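The soft limit being proposed could be as simple as clamping the core count against an optional override value. A minimal sketch, assuming a registry-style setting where 0 means "use the tested default" (names and policy are hypothetical, not ReactOS code):

```c
#include <assert.h>

/* Sketch: decide how many CPUs to bring online. A registry override of 0
   (or absent) means "use the tested default"; a nonzero value lets testers
   raise or lower it, always clamped to what the hardware actually reports.
   Hypothetical policy, for illustration only. */
static unsigned cpus_to_use(unsigned detected, unsigned tested_default,
                            unsigned registry_override) {
    unsigned limit = registry_override ? registry_override : tested_default;
    return (detected < limit) ? detected : limit;
}
```

Under this scheme, default installs never exceed the tested core count, while a tester on an 8-core box can opt in to all 8 by setting the override.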
Z98 wrote: 4. I'd suggest looking at the ReactOS registry to see if we have that stuff in the first place.
I would have to run registry cleaners and other utilities I use and see.
Z98 wrote: 6. We are unlikely to fix any "bug" that produces a different behavior on ReactOS than it does on Windows. We don't want third party developers to have to create a separate version for ReactOS or base their code on the fact that it is running on ReactOS vs Windows. And as this is a FAT32, the priority of this issue would be very low on the list of developer time.
However, fixing that bug would not cause ANY incompatibility, since no responsible defragger for any 32-bit NT version will even use the broken APIs - that is why they refuse to defragment folders. If there are any irresponsible programs, or someone crazy enough to use one intended for 95/98, then implementing the required calls properly would keep them from corrupting their drive. Even if the required APIs for this ability were omitted, it would not break compatibility with defraggers intended for 2000/XP/etc., since they know better than to use them. So it is certainly of low priority, and even then, realizing the corrected calls would mean either writing our own defragger or getting some of the available defraggers to test for ROS: if on ROS, defragment FAT32 folders; otherwise, ignore them and just stick to files, as is typical defrag behavior on 2K/XP/etc. Anyway, I see no harm in fixing this, since existing software has no business using those calls. So I say we either make them work as intended without corrupting the drive, or remove (or not add) them - thus crashing whatever software uses the broken functions and preventing corruption that way.
Black_Fox wrote: may as well be "AMD bought ATI and told the driver team to get their act together" </OT>
Actually, AMD bought them out after they replaced their driver team, from what I understand. AMD didn't help ATI, at least not immediately after first acquiring them. AMD made several tactical blunders. They shouldn't have bought ATI when they did; it is wise to always keep a financial buffer in case of unforeseen problems or mismanagement. Then they killed off their own most popular CPU at the time. And while they were in a bind from acquiring ATI and cutting production of their flagship product, they made a third blunder: cutting most of their R&D team. So: over-commit yourself financially, get rid of your best money-maker, then get rid of the people who could design an even better product.

I also just thought of another idea. I will add it to my first post.
Last edited by PurpleGurl on Wed Dec 21, 2011 5:01 pm, edited 1 time in total.

Z98
Release Engineer
Posts: 3379
Joined: Tue May 02, 2006 8:16 pm
Contact:

Re: Things we can do better than Windows

Post by Z98 »

I am not aware of any additional mechanism to allow addressing of memory beyond 4 GB besides PAE. If you honestly need more RAM, get a 64-bit processor and a 64-bit OS. For ROS, I'd much rather we spend effort completing the 64-bit port than try to work around an architectural limitation rooted in x86.

Scaling support up to more and more CPUs is not a trivial problem. We could have code that works for 2 CPUs but then spends an exorbitant amount of time doing bookkeeping at 64 CPUs, because we did not test for that case and did not uncover issues with whatever mechanism we use. The same problem happens in user applications when a naive attempt is made to multithread them. The implementation might work well with two or three threads, but at higher numbers spend so much time locking, unlocking, and exchanging data that the whole system slows to a crawl. And no matter how well designed the architecture is, bugs will creep into the implementation. This is why having physical access to machines with higher core counts is the only way to guarantee the correctness of both design and implementation.
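The scaling argument above can be illustrated with a toy cost model (illustrative numbers, not measurements of any real system): the parallel share of the work shrinks as 1/n, but coordination overhead grows with the number of CPUs, so past some core count adding CPUs makes things worse.

```c
#include <assert.h>

/* Toy cost model: total time = parallel work divided across n CPUs, plus
   a synchronization/bookkeeping cost that grows linearly with n. The
   crossover where more CPUs stop helping is at n = sqrt(work / sync_cost).
   Purely illustrative; real lock contention is far messier. */
static double total_cost(double work, double sync_cost_per_cpu, unsigned ncpus) {
    return work / ncpus + sync_cost_per_cpu * ncpus;
}
```

With work = 100 and unit sync cost, 8 CPUs beat 2, but 64 CPUs are worse than 8 - the "bookkeeping dominates" regime described above, which is exactly what only testing on real high-core-count hardware would expose.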

One clarification. By support, I mean we state we know it works up to said core count. Anything higher and you'll need to literally give us access to the machine if you want us to debug an issue.

If we don't have a function implemented, calling it will achieve nothing. So any "bad" defragmentation programs that do call these "broken" functions won't end up doing anything anyway. I'd say problem solved, especially if "good" defragmentation programs do not make use of the broken functions in the first place. If no one uses them, there's no reason to implement them.

Techno Mage
Posts: 89
Joined: Mon Nov 28, 2005 2:05 pm

Re: Things we can do better than Windows

Post by Techno Mage »

Since not everyone here is a programmer, maybe we could have a section where people can post pseudocode.
People could, for example, post how multi-CPU support would be handled, and other people could post modifications to it, or explain why it would not work.

PurpleGurl
Posts: 1788
Joined: Fri Aug 07, 2009 5:11 am
Location: USA

Re: Things we can do better than Windows

Post by PurpleGurl »

Z98 wrote:I am not aware of any additional mechanism to allow addressing of memory beyond 4GB besides PAE. If you honestly need more RAM, get a 64bit processor and a 64bit OS. For ROS, I'd much rather we spend effort completing the 64bit port than try to work around an architectural limitation that is rooted in x86.

Scaling support up for more and more CPUs is not a trivial problem. We could have code that works for 2 CPUs, but then basically spends an exorbitant amount of time doing bookkeeping at 64 CPUs because we did not test for that case and did not uncover issues with whatever mechanism we use. This same problem happens in user applications when a naive attempt to multithread it happens. The implementation might work well with two or three threads, but at higher numbers spend so much time locking, unlocking, and exchanging data that the whole system slows to a crawl. And no matter how well designed the architecture, bugs will creep into the implementation. This is why having physical access to machines with higher core counts is the only way to guarantee the correctness of both design and implementation.

One clarification. By support, I mean we state we know it works up to said core count. Anything higher and you'll need to literally give us access to the machine if you want us to debug an issue.

If we don't have a function implemented, calling it will achieve nothing. So any "bad" defragmentation programs that do call these "broken" functions won't end up doing anything anyway. I'd say problem solved, especially if "good" defragmentation programs do not make use of the broken functions in the first place. If no one uses them, there's no reason to implement them.
I understand your reasoning.

Here is what I was referring to about another memory management scheme:
http://en.wikipedia.org/wiki/PSE-36
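For reference, PSE-36 reaches 36-bit physical addresses with ordinary two-level paging by packing the extra address bits into the 4 MB page-directory entry: PDE bits 31:22 give physical bits 31:22, and PDE bits 16:13 give physical bits 35:32, per the Intel manuals. A sketch of that translation (illustration only, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* PSE-36 4 MB page translation: the PDE carries physical address bits
   31:22 in PDE[31:22] and the extra bits 35:32 in PDE[16:13]; the low
   22 bits of the virtual address are the offset within the 4 MB page. */
static uint64_t pse36_phys_addr(uint32_t pde, uint32_t vaddr) {
    uint64_t high = (uint64_t)((pde >> 13) & 0xF) << 32;  /* phys 35:32 */
    uint64_t base = pde & 0xFFC00000u;                     /* phys 31:22 */
    return high | base | (vaddr & 0x003FFFFFu);            /* 4 MB offset */
}
```

So unlike PAE, PSE-36 keeps 4-byte page-directory entries and the two-level walk, which is why old drivers tolerated it better, at the cost of only supporting large pages for the high memory.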

On SMP, well, I still see no harm in allowing users to override any core limit checks for testing purposes. It would be wise to default to what you know works and then allow changes by the users. That way, if more cores are not supported, what they have is still stable. And it could help differentiate problems: someone could blame the SMP support when it might just be their machine. If it doesn't work at default settings, then it cannot be blamed on any overrides, thus clarifying the source of the problem. That isn't to say someone cannot be affected by multiple bugs, since the first one is the problem they would notice. Anyway, I understand what you mean by support now.

As for the broken functions, having them in there and fixed would allow for two scenarios. We could provide our own defragmenter, and those who want to support us specifically could test for ROS and attempt what won't work under NT (not too much extra code for them). No NT-compatible defragmenter can defragment fragmented FAT32 folders. I haven't seen anyone attempt to get direct access to the hardware and rely on their own FAT32 folder-handling code, so I assume for now that without OS support it is not possible at all. This is quite a handicap. Why should I settle for defragmented files stored in a directory that is scattered all over the place? I have a kludgy way to defrag folders anyway, but it may take multiple attempts since it relies on luck-of-the-draw placement: create a new folder, rename the old one to something else, give the new folder the old name, and then drag the files to the new location. But that doesn't always work, since you may fragment the new folder if it is of significant size (and it would be, or it wouldn't have been fragmented in the first place). Plus, you might end up with the replacement folder at the end of the disk or some place worse than before. With the glitchy calls, I don't know if folders can be moved. But that said, we have much more pressing issues than this.
Techno Mage wrote:Since not everyone here is a programmer, maybe we could have a section made up where people can post Pseudocode
People could for example post how multi-cpu would be handled, and other people could post modifications to it. Or explain why that would not work.
I agree. I am in the category of "former coder." I wrote for real mode in QuickBasic, often using the PDQ alternative library, and since that expensive library included full sources, I used it, a DOS calls manual, and another reference book to teach myself some assembly. I even disassembled or traced my own programs to see what the compiler did, and I saw it was efficient in some places and inefficient in others. So I would write my own modules in TASM and add them to a library to include in my programs. However, I never learned to code for protected mode. So yes, I believe a forum for those who are logical and know how to think and get things done but who are coding-illiterate is a great idea. Pseudo-code, flow charts, diagrams, etc., would all be on-topic. It would be a place to post more general ideas, even by those who don't know the specifics. Then actual developers could look at those for ideas, and they would know what to do with them. Just like in real life, there are many "Monday Morning Quarterbacks": they know how to play the game (US football), but they are not out on the field.

And your idea could work with memory management, SMP/hyperthreading/multi-core, file systems, etc.

hto
Developer
Posts: 2193
Joined: Sun Oct 01, 2006 3:43 pm

Post by hto »

It can take much effort to convert pseudocode to real code - the devil is in the details. Better to write real code from the beginning, then submit patches.

Techno Mage
Posts: 89
Joined: Mon Nov 28, 2005 2:05 pm

Re: Things we can do better than Windows

Post by Techno Mage »

None of the programming languages I use are anything like C or C++

Z98
Release Engineer
Posts: 3379
Joined: Tue May 02, 2006 8:16 pm
Contact:

Re: Things we can do better than Windows

Post by Z98 »

PurpleGurl wrote: On SMP, well, I still see no harm in allowing users to override any core limit checks for testing purposes. It would be wise to default to what you know works and then allow changes by the users. That way, if more are not supported, what they have is stable. And it could help differentiate problems. I mean, someone could blame the SMP support when it might just be their machine. So if it doesn't work at default settings, then it cannot be blamed on any overrides, thus clarifying the source of the problem. That isn't to say that someone is not affected by multiple bugs, since the first one is the problem they would notice. Anyway, I understand what you mean by support now.
You're either not understanding me or not reading what I wrote. I said support basically constitutes "we know it works up to this many cores and we make no guarantees about anything higher." Nowhere did I mention any kind of lockout of cores.
Techno Mage wrote:Since not everyone here is a programmer, maybe we could have a section made up where people can post Pseudocode
People could for example post how multi-cpu would be handled, and other people could post modifications to it. Or explain why that would not work.
What hto said, but even more so. To understand how to achieve things like SMP, you need to know the mechanisms available in the processors for synchronization, atomic access, and cross-CPU signaling. These are not trivial and are fairly specific to each ISA. There are even subtle differences between what AMD and Intel have done in implementing x86 and x64 that can require a second binary for the lower-level components. If you have enough of an understanding of these things to come up with a system to support SMP, then you're likely able to code it up yourself. Systems programming knowledge pretty much assumes familiarity with C and/or assembly, and it is highly unlikely one could achieve an understanding of the synchronization mechanisms provided by a processor without being knowledgeable about systems programming in the first place. One generally does not start learning at that low a level without learning systems programming first, since systems programming provides much of the foundation and context for what you read in the Intel or AMD manuals about their processors.
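To give a flavor of the primitives being referred to: cross-CPU synchronization ultimately rests on an atomic read-modify-write operation that the ISA guarantees (LOCK-prefixed XCHG/CMPXCHG on x86). A minimal test-and-set spinlock built on C11 atomics - an illustrative sketch, nothing like kernel-grade code:

```c
#include <assert.h>
#include <stdatomic.h>

/* Minimal test-and-set spinlock on C11 atomics. atomic_flag_test_and_set
   is an atomic read-modify-write; on x86 the compiler emits a locked
   exchange, which is exactly the ISA-level mechanism discussed above. */
typedef struct { atomic_flag locked; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* Spin until the flag was previously clear (i.e., we acquired it). */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

Even this tiny example carries the subtleties mentioned: the acquire/release memory orders encode guarantees that differ in cost and detail across ISAs, which is why this layer is hard to design from pseudocode alone.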

PurpleGurl
Posts: 1788
Joined: Fri Aug 07, 2009 5:11 am
Location: USA

Re: Things we can do better than Windows

Post by PurpleGurl »

Z98 wrote: You're either not understanding me or not reading what I wrote. I said support basically constitutes "we know it works up to this many cores and we make no guarantees about anything higher." Nowhere did I mention any kind of lockout of cores.
I understood you well and I moved on to a different aspect. I still think a registry limit is a good idea in the early stages.

Moving on, I should have clarified what I meant by multi-CPU support in item #2. By artificial limits, I was referring to Microsoft's practice of limiting the number of processors based on license type (i.e., pay more to use more CPUs or more than so many cores). So our limits should be technical rather than arbitrary or license-based. It is one thing to not provide OS support or tech support because of a learning curve. It is another to not provide support in the OS for marketing reasons. So if any NT flavor can do something, and it is a valuable enough feature, then we should do it too, in time. XP Home is only licensed for a single physical processor and only so many cores, but the corporate and enterprise versions allow more.
Z98 wrote: What hto said, but even moreso. To understand how to achieve things like SMP, you need to know the mechanisms available in the processors for synchronization, atomic access, and cross-CPU signaling. These are not trivial and are fairly specific to each ISA. There are even subtle differences between what AMD and Intel have done in implementing x86 and x64 that can require a second binary for the lower level components. If you have enough of an understanding of these things to come up with a system to support SMP, then you're likely able to code them up yourself. Systems programming knowledge pretty much assumes familiarity with C and/or assembly, and it is highly unlikely one could achieve an understanding of the synchronization mechanisms provided by a processor without being knowledgeable about systems programming in the first place. One generally does not starting learning at that low a level without learning systems programming first, since systems programming provides much of the foundation that provides context for what you read about in the Intel or AMD manuals that talk about their processors.
Okay. So if someone knew enough of the inner workings to write accurate pseudo-code, they would have already coded it. So where would someone like me get started? I am sort of a solutions provider in generalities, but I would love to get involved more. I hate what little of C I was exposed to, but I guess I could swallow my pride and try to learn it. Programming on this level seems complicated and overwhelming for me. I know very little about the internal workings of Windows and Windows software and have never coded for protected mode, virtual mode, etc. Even the opcodes have expanded since I coded for MS-DOS. I probably should post over on the newbie forum if I am serious.

Black_Fox
Posts: 1584
Joined: Fri Feb 15, 2008 9:44 pm
Location: Czechia

Re: Things we can do better than Windows

Post by Black_Fox »

PurpleGurl wrote:Moving on, I should have clarified what I meant by multi-CPU support in item #2. By artificial limits, I was referring to Microsoft's notion to limit numbers of processors based on license type (ie., pay more to use more CPUs or more than so many cores). So our limits should be technical rather than arbitrary or license-based. It is one thing to not provide OS support or tech support because of a learning curve. It is another to not provide support in the OS because of marketing reasons. So if any NT flavor can do something, and it is a valuable enough feature, then we should do it too, in time. XP Home is only licensed to a single physical processor and only so many cores, but the corporate and enterprise versions allow more.
Before this topic spans a few pages, allow me to tell you (and Z98) this: you basically agree with each other, and there is little point discussing it further :) Maybe a little clarification: someone comes to IRC saying, "Hey, I have a problem. ReactOS BSODs when I <do something> on my 4-processor, 32-cores-in-total machine." The ROS devs reply, "OK, we cannot reproduce it on any of our machines; the best we have is a 3-processor, 24-cores-in-total machine, and it doesn't BSOD when we try to <do something>. It's not that we don't want to help you, but we can't as of yet without access to this kind of machine for debugging." That's the limit Z98 means.
