Technology as a way of life

Life is the art of drawing without an eraser...

Last year Jammie Thomas lost her retrial against the RIAA and was ordered to pay $1.92 million in damages for sharing 24 songs using Kazaa.

Just days after Judge Davis, responding to widespread outrage at the huge award, reduced the $1.92m damages bill handed down to Jammie Thomas to a slightly more sensible $54,000, her legal team turned down an offer to settle for $25,000.
To try to bring the whole sorry episode to an end, yesterday the RIAA offered to settle the case for $25,000, provided Thomas agreed to ask the judge to “vacate” last week’s decision, which means removing it from the record.
Her lawyers immediately rejected the offer, indicating they would only accept a settlement under which Thomas pays nothing.
The RIAA said that, in light of this rejection, it will now challenge the judge’s decision to lower the $1.92m damages.
Will this ever come to an end? What a complete waste of money.
(Source: FreakBits.com)


LONDON (Reuters) - Memory chips for computers are likely to be in short supply by the second half of next year as consumers demand more capacity and companies embark on a delayed drive to replace PCs, industry tracker DRAMeXchange believes.

Prices for DRAM chips, the most common type of computer memory, have stabilized over the past two months after rising for most of the year as recession-struck chipmakers slashed capacity and capital spending, causing shortages.
DRAMeXchange forecast on Thursday that shipments of PCs would rise 13 percent next year, driven by notebooks, with 22.5 percent growth to 160 million units, and pared-down netbooks, set to rise 22 percent to 35 million units.
"DRAM will likely face a serious shortage in 2H10 triggered by the hot PC sales," DRAMeXchange said. "The DRAM price decline will likely be eased in 2Q10. That is, DRAM vendors will have a great opportunity to remain in profit for the whole year." Top U.S. memory chipmaker Micron on Tuesday delivered its first quarterly profit in nearly three years as rising prices lifted sales beyond expectations.
DRAMeXchange forecast that 2010 capital expenditure by DRAM vendors would rise 80 percent from this year's record low to $7.85 billion, rising to $10 billion to $12 billion by 2011 or 2012.
Industry leader Samsung would spend $2.6 billion, it predicted. Fellow Korean chipmaker Hynix said on Thursday it planned 2.3 trillion won ($2.2 billion) in capital expenditure next year.


SEOUL (Reuters) - LG Electronics Inc on Thursday launched liquid crystal display (LCD) televisions that use light-emitting diodes (LED) as a light source, and said it was aiming to sell as many as 5 million units in 2010.

The South Korean flat-screen TV maker also reiterated its previously stated goal of selling 18 million LCD sets in 2009, to become the world's second-largest maker of LCD televisions after domestic rival Samsung Electronics and ahead of Japan's Sony.

Despite the global economic downturn that has devastated the consumer electronics sector worldwide, makers of flat-screen televisions have managed to weather the slowdown better than other industries.

LG and Samsung are betting on LED-backlit LCD televisions as future profit drivers in their display divisions. LED televisions are about a third thinner than older models using cold cathode fluorescent lamps (CCFLs), have longer lifespans and offer more vivid images, with greater contrast and color range.

Simon Kang, chief executive of LG's home entertainment division, said the company was on track to achieve its earlier stated target of 18 million LCD televisions, adding that internal targets could be "stretched" even more.

LG said LED televisions, which make up only 2.6 percent of the total LCD set market, could represent as much as 20 percent of the market in 2010 and 40 percent of the total market in 2011.

The popularity of the new models will depend heavily on consumer acceptance of a significant price premium, which is anywhere from 50 percent to 70 percent over traditional LCD TVs. But LG expects the price differential to narrow rapidly to about 40-50 percent in 2010 and closer to parity in 2011.

LCD TVs make up about 60 percent of global TV shipments, according to research group DisplaySearch. LED TV sales are expected to grow more than tenfold this year to about 2 million units out of total LCD TV sales of 120 million.

The popularity of LED backlit LCDs is surging in the notebook and desktop computer markets, where DisplaySearch expects a penetration rate of 25 percent by the fourth quarter of this year, from about 12 percent in the first quarter.

Shares in LG Electronics rose 2.18 percent to 117,000 won on Thursday, in line with the broader market's 2.12 percent gain.

The Swedish Performing Rights Society plays up the results, but admits the study was compiled from data obtained from self-selected ad respondents rather than from a statistically representative sample.

The Swedish Performing Rights Society (STIM) has released a study titled “Pirates, File-Sharers and Music Users”(.pdf) that claims 86.2% of online music fans would pay a monthly subscription fee in exchange for legalized file-sharing.

Only 5.2% said they wouldn’t be interested.

When asked how much they would be willing to pay, 51.8% said between SEK 50 ($5.84 USD) and SEK 150 ($17.53) per month. Some 18.8% would consider paying between SEK 150 and SEK 300 ($35.08), and 21.7% would pay less than SEK 50 ($5.84) per month.

This sounds cool, but the plan has its drawbacks. What’s to stop other copyright holder groups, like the movie or gaming industries, from coming along and demanding their own monthly subscription fees?

(Source: Zeropaid)

The Windows 7 Release Candidate has been released and is available on the Microsoft Partner Program site, which contains a short post and a download button for the release candidate. It does not look like it is offered to all subscribers at the moment, as some report that they only see beta downloads and not the RC in the download list. It could be that it is only available to selected partners at this time; it is, however, very likely that all partners will be able to download the release candidate soon. Both MSDN and TechNet subscribers will be able to download the RC prior to its public release.

Interesting for mere mortals is the public release date of the Windows 7 Release Candidate, which has been set for May 5.

As the next-generation operating system from Microsoft, Windows 7 opens new development, sales, and services opportunities for your business. With Windows 7, you can offer your customers a robust foundation for high-quality experiences across applications, services, PCs, and devices.
We are pleased to announce that Windows 7 Release Candidate (RC) is available. Windows 7 RC is the prerelease version of Windows 7. Since this is not the final release, your PC will gather and send information to Microsoft engineers to help them check the fixes and changes made based on testing of Windows 7 Beta.
Test-drive Windows 7 RC today to see for yourself, and to show your colleagues and customers, how Windows 7 delivers improved management, security, reliability, and performance.
Download Windows 7 RC
Partners: If you have a subscription to MSDN or TechNet, you can download Windows 7 RC now. Otherwise, you can download Windows 7 RC starting May 5, 2009.

And now we wait for the leaks... :) I'll keep you guys updated.

Phishing scams have grown up from the unsophisticated swindles of the past, in which fake Nigerian princes e-mailed victims promising a big windfall if they just provided their bank account number.

Even as authorities try to stamp out that con and other e-mail and online scams, scammers are getting more wily and finding new loopholes to exploit.

The vast majority of e-mail is spam, and an unknown percentage of that is meant to defraud. The scale of electronic fraud means that the criminals can make huge profits even if only a small percentage of people are duped.

Phishing commonly refers to hoax e-mails purportedly from banks or other trustworthy sources that seek to trick recipients into revealing bank or credit card account numbers and passwords.

The U.S. government scored a big victory in November when the web hosting company McColo Corp. was taken offline. Estimates vary, but the Washington Post said that 75 percent of spam worldwide had been sent through that single company.

But the spam e-mails offering celebrity diets, cheap printer ink, erased credit card debt and amazing orgasms quickly found a new way to inboxes, according to Google's security subsidiary Postini.

Now spammers use a variety of computers to send out spam e-mails to obscure their origins, meaning that a dramatic McColo-style takedown will be harder to reproduce, said Adam Swidler, product marketing manager for Google's Postini.

And they've largely abandoned scams that are easy to see through -- like the Nigerian prince -- in favor of more sophisticated "location-based spam," which directs the victim to a Web site discussing a local disaster or similar issue. If they click on the offered video, the Web site downloads a virus to the user's computer, Google said in a blog on security.

Tim Cranton, a Microsoft cybersecurity expert, said there was no way to know how much money is stolen. "We don't have a way to estimate numbers because there are so many victims that you're not aware of," he said.

WHAT IS 'SMISHING'?

New technology means new ways to steal. One of the latest is "smishing," which is nothing more than a phishing fraud sent via SMS text messaging.

E-con artists are getting more sophisticated in approaching potential victims. One tactic has been to write spam that purports to come from a trusted source, like PayPal.

When PayPal, which is owned by eBay, learned that spammers were using its name, it put a digital signature on its e-mails and asked providers like Yahoo and Google to block any e-mail purporting to come from PayPal that did not carry that signature.

"We know how many they throw away and it's approximately speaking about 10 million a month," said Michael Barrett, PayPal's chief information security officer. "If the consumer never sees the e-mail in the first place then it's hard for them to get victimized."

"Phishing was not just impacting consumers, in terms of general loss, it was impacting their view of the safety of the Internet and that it was indirectly damaging our brand," added Barrett.

Security experts say they are seeing a growing shift from outright fraud, where the victim hands over their money, to the use of malware: malicious software which, among other things, collects passwords and credit card numbers for thieves.

"Those will then be sold on the underground market," said David Marcus, a threat research expert at McAfee computer security firm.

The person purchasing the passwords and card numbers will use that information to make purchases, get cash or create fake identities.

The Federal Bureau of Investigation, working with police in the United Kingdom, Turkey and Germany, shut down one such online forum called DarkMarket in October 2008 which, at its peak, had more than 2,500 registered members, according to an FBI press release issued at the time.

But experts agreed that they didn't expect the problem to go away anytime soon, and that more people out of work could well mean more people likely to fall for scams.

Marcus said many of the scams were nothing more than the digital equivalent of confidence tricks, although on a massive scale that can net some scammers more than $100,000 a month.

"These things only have to be 2 percent successful," he said. "Those campaigns are sent out to tens of millions of people at the same time.

Last month, Microsoft added 55TB of imagery to Virtual Earth, but this month's number isn't so high. Virtual Earth's latest imagery release is 21TB worth of data, according to Virtual Earth, An Evangelist's Blog. This is one of the smallest updates we've seen so far for Virtual Earth, but nevertheless, it's worth mentioning what is new.

While the size of the update isn't as large as it usually is, the list of countries affected is still quite long. Last month's release focused on Obliques (Bird's Eye View) but this month is all about Orthos images for the following countries: Albania, Australia, Bahamas, Belize, Bosnia and Herzegovina, Botswana, Brazil, Bulgaria, Canada, Cape Verde, Central African Republic, Chad, China, Comoros, Costa Rica, Côte d'Ivoire, Croatia, Cuba, Democratic Republic of Congo, Dominican Republic, Eritrea, Estonia, France, Gabon, Great Britain, Greece, Guinea-Bissau, Hungary, India, Japan, Kenya, Latvia, Liberia, Lithuania, Luxembourg, Madagascar, Malawi, Mauritania, Mauritius, Mexico, Moldova, Montenegro, Namibia, Norway, Panama, Poland, Portugal, Republic of Djibouti, Republic of Macedonia, Romania, Russia, Serbia, Slovakia, South Africa, Spain, Seychelles, Tanzania, Turkey, Ukraine, the US, Western Sahara, Zambia, and Zimbabwe.

To check out all the changes, head over to maps.live.com, choose 3D, and install the latest Virtual Earth 3D (Beta) software.

Since I talked about the new drivers made for Windows 7 in the previous post, let me point out their main feature: the Windows Display Driver Model (WDDM) v1.1. Vista used version 1.0 of the driver model (XP used the older XPDM).

What's the big difference, you ask, from such a small revision bump? Answer: A LOT!

So what do you get with WDDM 1.1? For starters, the DWM (Desktop Window Manager) will use DirectX 10 instead of 9 and offer some nice performance enhancements. DWM will now use the same consistent amount of system memory no matter how many windows are open. Your graphics card memory usage will still increase as you open more windows, but that amount of memory is cut in half relative to Vista.

The end result is that Windows 7 will use dramatically less memory as you open additional windows on your desktop. It should also be faster and more efficient at rendering the desktop.

Windows 7 will introduce Direct2D, a sort of hardware-accelerated replacement for GDI used for drawing 2D graphics, lines, splines, etc. It runs on top of Direct3D 10 and requires WDDM 1.1 drivers. This has been long awaited, and standard applications, not just 3D apps, should see snappier window performance.
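To give a feel for it, here's roughly what drawing through Direct2D looks like on the application side; a minimal sketch of my own (assuming an existing window handle, error handling omitted):

```cpp
// Minimal Direct2D sketch: create a factory and a render target for an
// existing window, then draw a line. Rasterization runs on Direct3D 10.
#include <d2d1.h>
#pragma comment(lib, "d2d1.lib")

void DrawDemo(HWND hwnd, RECT rc)
{
    ID2D1Factory* factory = NULL;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &factory);

    ID2D1HwndRenderTarget* target = NULL;
    factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(hwnd,
            D2D1::SizeU(rc.right - rc.left, rc.bottom - rc.top)),
        &target);

    ID2D1SolidColorBrush* brush = NULL;
    target->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Black), &brush);

    target->BeginDraw();
    target->Clear(D2D1::ColorF(D2D1::ColorF::White));
    target->DrawLine(D2D1::Point2F(0.0f, 0.0f), D2D1::Point2F(200.0f, 100.0f),
                     brush, 2.0f);
    target->EndDraw();

    brush->Release();
    target->Release();
    factory->Release();
}
```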

DirectX 10 games should see a performance improvement with WDDM 1.1 drivers, mostly centered around memory management. It's still far too early for Microsoft to share any sort of numbers, but Microsoft seems to suggest that the performance difference will vary by title.

Don't you hate it when a media application uses the video overlay function of your video card in Vista, and there's this annoying screen flash as you lose the Aero glass interface? Then there's another flash as Aero is restored when you shut the program down. With Windows 7 and WDDM 1.1 drivers, that shouldn't happen anymore.

If you use a projector that has a 4:3 aspect ratio with a laptop or desktop PC with a widescreen monitor, you probably feel some frustration about scaling problems. WDDM 1.1 drivers will enable some new scaling modes, making your widescreen display show the same aspect ratio as your projector.

If you have a WDDM 1.0 driver, the desktop will still display Aero, running on what Microsoft is calling "10 level 9." This uses the same DirectX 10 calls for the DWM to render the desktop, but translated to a subset of those functions and sent to the DX9 drivers of these older cards. Don't expect game developers or anyone else to really use these functions; it's simply a desktop rendering solution Microsoft has come up with to support older hardware.

The good news is that pretty much all DX10-capable graphics cards and integrated graphics chipsets should have WDDM 1.1 drivers, and with a year (or so) to go before Windows 7 hits the market, those cards should be commonplace.

(source: ExtremeTech)

Well, first of all I have to warn everyone that there is a small risk in attempting to recharge a non-rechargeable battery: possible leaks (the liquid is highly toxic) or even damage to your charging device.



Now that the "don't blame me if you accidentally burn your house down" part is over, we can continue:

Taking a normal AA or AAA battery and putting it in a normal battery recharger is the "hardest" part of the experiment. A slow recharge rate is preferred (around 64-100 mA) because the slower the recharge, the better the chances it won't leak. Anyway, it's recommended that you don't leave the charger unsupervised for long periods of time (over 2 hours), and unplug it for 10 minutes if the batteries are getting warm.
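As a rough sanity check on those numbers (my own back-of-the-envelope math, not from any datasheet): charge returned ≈ current x time, so 100mA for 2 hours puts roughly 200mAh back into the cell per session, and four or five sessions add up to around 800-1000mAh, a reasonable fraction of an alkaline AA's nominal capacity.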

The small standard AA and AAA batteries are no problem to fit in any charger, but what if it doesn't support 5V/9V/12V batteries? Well, then we need something else to transfer a steady supply of current through them. I guess everyone uses a device called a transformer (an AC adapter) to recharge their mobile phone/MP4 player/iPod etc.

On the top side lies a little sticker with a few specs. Most important is the output. For charging a 9V battery, any transformer with an output between 9-12V is good, but the maximum current (mA, milliamperes) should be kept under 400 for safety reasons, because the battery may get too hot, leak and ruin your beautiful table (or wherever you placed it). As you can see below, the voltage is good (12V) but the current is way too high (1670mA), which would cause it to leak in less than 30 minutes. I don't recommend using your phone's charger or any adapter you actually need in top condition, because there is a very small risk of burning it out. Use one that lies around in a drawer somewhere from some old appliance. The wiring may be tricky, and not too advanced, but it does its job. Use wiring to connect the inner hole to the "+" sign on the battery and the outer metal layer to the "-" sign (don't ask how to keep them in place, be creative or use duct tape like I do hehe).

So let's plug it in and charge it in short sessions (1-2 hours) a few times (4-5 times). The battery should then be in a working state at around 75% of the original charge (if you have a multimeter, check the voltage on it).

So I did my own experiment using 2x AA alkaline Energizer batteries. They were used in my wireless mouse until the mouse died (with a charge around 0.50V and 3mA they were heading for the trash bin). I put them in the charger and left them in for 2 hours at a time with a 5 minute break. I repeated this cycle 5 times. At the end I left them on the table for an hour to stabilize. When I checked them with the multimeter I was pretty impressed: a voltage of 1.32V and around 135mA (the standard batteries I buy have 175mA). I am using them right now in my wireless mouse and they work just like any other battery. I will have to see if they last in the long run (2-3 weeks), because I usually change them once a month.

The warnings on the battery labels are a bit exaggerated, because the battery is very unlikely to catch on fire if recharged; it may just leak (the liquid is very corrosive and may damage the device).

MY TESTS:

2x AA Energizer: 0.5V @ 3mA (before charge) -> 1.32V @ 135mA

1x 9V Toshiba: 0.29V @ 10mA (before charge) -> 8.22V @ 320mA

The most important things you have to remember are:
- outputs of max 180mA can be unsupervised for a maximum of 4 hours in my experience (but the quality of the battery matters too in preventing leaks)
- outputs over 200mA (max 500mA) can be unsupervised for around an hour or two (just check if it's getting too warm; if not, it should be OK)
- don't let the battery get hot (unplug it, let it rest 30 minutes, and retry)
- don't try recharging extremely cheap batteries or those which are not alkaline (the quality is doubtful and they may leak within 20-30 minutes)

Well, it's REALITY, and I will keep you updated on how they are performing...

I guess many of you are googling all day for tweaks to bring DirectX 10 to XP and are not sure what to believe, considering the mass of contradictory information. The interesting thing is that some Russian site made an alpha "patch" to add the functionality to XP. But does it actually work?
After downloading the patch, I installed it on a virtual machine to see if it would even boot afterwards.
Well, to my surprise it did boot normally, and the DirectX Diagnostic Tool (dxdiag.exe) "confirmed" it's installed. This would be enough proof for the majority of people, but not for a geek like me. That little "change" is easily done by modifying a file and a bit of registry tweaking, so that still doesn't convince me. The patch could work only with the SDK's (Software Development Kit) DirectX 10 samples and not be suitable for games: it makes the system think you have DX10, but no actual use can be achieved (the extra features are not enabled in anything).
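For what it's worth, the version string dxdiag reports is just read from the registry, which is why it proves nothing on its own; a minimal sketch of my own for checking it (assuming the standard DirectX version key, error handling trimmed):

```cpp
// Read the DirectX version string that dxdiag reports. Faking this value
// (plus a file swap) is about all such a "patch" needs to fool dxdiag.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    char version[64];
    DWORD size = sizeof(version);

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\Microsoft\\DirectX",
                      0, KEY_READ, &key) == ERROR_SUCCESS) {
        if (RegQueryValueExA(key, "Version", NULL, NULL,
                             (LPBYTE)version, &size) == ERROR_SUCCESS)
            printf("DirectX version string: %s\n", version);
        RegCloseKey(key);
    }
    return 0;
}
```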

Microsoft did announce that there is no compatibility between XP and DX10, because of changes in the Windows Display Driver Model (WDDM) and the new audio driver stack, plus other updates in the operating system.

Will DirectX 10 be available for Windows XP?

No. Windows Vista, which has DirectX 10, includes an updated DirectX runtime based on the runtime in Windows XP SP2 (DirectX 9.0c) with changes to work with the new Windows Display Driver Model (WDDM) and the new audio driver stack, and with other updates in the operating system. In addition to Direct3D 9, Windows Vista supports two new interfaces when the correct video hardware and drivers are present: Direct3D9Ex and Direct3D10.

Since these new interfaces rely on the WDDM technology, they will never be available on earlier versions of Windows. All the other changes made to DirectX technologies for Windows Vista are also specific to the new version of Windows. The name DirectX 10 is misleading in that many technologies shipping in the DirectX SDK (XACT, XINPUT, D3DX) are not encompassed by this version number. So, referring to the version number of the DirectX runtime as a whole has lost much of its meaning, even for 9.0c. The DirectX Diagnostic Tool (DXdiag.exe) on Windows Vista does report DirectX 10, but this really only refers to Direct3D 10.

Even if DX10 is displayed, that doesn't mean you actually have Direct3D 10, which is what's needed for true DX10 support. Anyway, let's see if a DX10 game will start under these conditions, because if it does, that will prove me wrong.

Well, what a big surprise... I was actually 100% sure that it would not work. Going a little off-topic, I want to ask you this:
Why would someone with a decent computer (considering they want DX10) still use XP seven years after its release, when Windows 7 is just around the corner (yes, I admit Vista kinda sucks) and will be what Vista should have been? Sure, it eats more RAM, but let's face it... systems are built with a minimum of 2GB nowadays, and 4GB systems are really common. As I discussed in an earlier post, what's the point of having loads of memory if it's going to sit unused (as in XP)? Vista at least fills it with the most used programs, which helps the start time of those applications.

Back on topic: Windows XP DX10 is a myth and does NOT work as it should, so I consider it BUSTED!


Disabling QoS to Free Up 20% of Bandwidth

This tip made the rounds with people believing that Microsoft always allocates 20% of your bandwidth for Windows Update. According to the instructions, you were supposed to disable QoS in order to free up bandwidth. Unfortunately this tip was not only wrong, but disabling QoS will cause problems with applications that rely on it, like some streaming media or VoIP applications.

Rather than taking my word for it, you can read the official Microsoft response: "There have been claims in various published technical articles and newsgroup postings that Windows XP always reserves 20 percent of the available bandwidth for QoS. These claims are incorrect... One hundred percent of the network bandwidth is available to be shared by all programs unless a program specifically requests priority bandwidth."
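For reference, the setting this myth tells you to fiddle with lives in the Group Policy editor (assuming XP Professional, where gpedit.msc is available):

```
gpedit.msc -> Computer Configuration -> Administrative Templates
           -> Network -> QoS Packet Scheduler -> "Limit reservable bandwidth"
```

Left at "Not configured", the default behavior described in the quote above applies; the myth told people to enable it and set the limit to 0.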

Make Vista Use Multiple Cores to Speed Up Boot Time

This bogus tip made the rounds recently and almost everybody got caught, including Lifehacker and big brother site Gizmodo... although commenters called it out quickly on both sites, and the editors updated the posts. (That's yet another reason to always participate in the comments here.)

According to this tip, you were supposed to use MS Config to modify the "Number of processors" drop-down on the Boot tab. The problem is that this setting is only used for troubleshooting and debugging, to be able to determine if there is a problem with a single processor, or for a programmer to test their code against a single core while running on a multi-core system. Windows will use all your processors by default without this setting.

Clearing Out Windows Prefetch for Faster Startup

The Prefetch feature in Windows XP caches parts of applications that you frequently use and tries to optimize the loading process to speed up application start time, so when a number of sites started suggesting that you clean it out regularly to speed up boot time it seemed like good advice... but sadly that's not the case, as pointed out by many Lifehacker commenters.

The Prefetch feature is actually used as a sort of index, to tell Windows which parts of an application should be loaded into memory in which order to speed up application load time, but Windows doesn't use the information unless it's actually starting an application. There's also a limit of 128 files that can be stored in the prefetch folder at any point, and Windows cleans out the folder automatically, removing information for applications that haven't been run as frequently. Not only that, but a well-written defrag utility will use the prefetch information to optimize the position of the files on the disk, speeding up access even further.

Cleaning the Registry Improves Performance


The Windows registry is a massive database of almost every setting imaginable for every application on your system. It only makes sense that cleaning it out would improve performance, right? Sadly it's just a marketing gimmick designed to sell registry cleaner products, as the reality is quite different... registry cleaners only remove a very small number of unused keys, which won't help performance when you consider the hundreds of thousands of keys in the registry.

Clear Memory by Processing Idle Tasks

By this point you should be starting to get the picture... if something sounds too good to be true, it likely is. This well-traveled tip usually claims that you can create an "undocumented" shortcut to Rundll32.exe advapi32.dll,ProcessIdleTasks that will clear out memory by processing all of the idle tasks wasting memory in the background.

What's the problem? Those idle tasks aren't actually waiting in the background... what you are effectively doing is telling the computer that you've walked away, so it can now do other processing while you are idle. Except you aren't. The real purpose of this functionality is to finish all pending processing before running benchmarks, to ensure consistent times; the Microsoft documentation tells a whole different story from the tip.
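For reference, this is the exact command the tip tells you to put in a shortcut:

```
Rundll32.exe advapi32.dll,ProcessIdleTasks
```

It's a real entry point; it just doesn't do what the tip claims.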

Clean, Defrag and Boost Your RAM With SnakeOil Memory Optimizer

Just take a quick look at any download site, and you'll find hundreds of products that claim to "optimize RAM to make your computer run faster". Give me a break! Almost all of these products do the same things: they call a Windows API function that forces applications to write out their memory to the pagefile, or they allocate and then deallocate a ton of memory quickly so that Windows will be forced to page everything else.

Both of the techniques make it appear that you've suddenly freed up memory, when in reality all you've done is trade in your blazing fast RAM for a much slower hard drive. Once you have to switch back to an application that has been moved to the pagefile, it'll be so slow you'll be likely to go all Office Space on your machine.
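To make that concrete, here is a minimal sketch of my own showing roughly what such an "optimizer" boils down to, using the documented working-set APIs (my illustration, not any particular product's code):

```cpp
// What a typical "RAM optimizer" actually does: force every process to
// trim its working set, pushing its pages out toward the pagefile.
// Run with administrator rights so OpenProcess succeeds on most processes.
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

int main(void)
{
    DWORD pids[1024], bytes;
    if (!EnumProcesses(pids, sizeof(pids), &bytes))
        return 1;

    for (DWORD i = 0; i < bytes / sizeof(DWORD); i++) {
        HANDLE proc = OpenProcess(PROCESS_SET_QUOTA | PROCESS_QUERY_INFORMATION,
                                  FALSE, pids[i]);
        if (proc) {
            // "Frees" RAM by evicting the process's pages to disk --
            // which is exactly why everything feels slow afterwards.
            EmptyWorkingSet(proc);
            CloseHandle(proc);
        }
    }
    printf("Working sets trimmed. Task Manager now shows more free RAM.\n");
    return 0;
}
```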

Disabling Shadow Copy/System Restore Improves Performance

I've barely come across a Windows Vista tips site that doesn't tell you to disable System Restore to speed up performance, because it takes up to 15% of your hard drive by default, which sounds like good advice. Except it's not.

The reality is that System Restore only actually kicks in when you are installing updates or applications, or at pre-scheduled times in the day, and the automatic checkpoints will only happen when your computer is not being used. These checkpoints allow you to easily roll back your system to a pre-crash state, and I can tell you from experience that System Restore is a critical feature when your Vista machine has problems, allowing you to easily get back to a working state.

Enable SuperFetch in Windows XP

Somebody decided to start spreading the myth that you could enable SuperFetch in Windows XP by adding the same EnableSuperfetch key into the registry that Windows Vista has, and it spread like wildfire. Naturally, this tip was completely bogus.

The good news is that this tip is one of the few that will not harm your system in any way, as long as you don't break something while editing the registry. If you insist on using it, I won't complain.

Disabling Services to Speed Up the Computer

Perhaps the most common myth is the advice to disable all services that you aren't using. I realize this will generate some controversy, so let me clarify: Disabling non-essential services that are NOT part of Windows will sometimes yield a performance gain if you have identified those services as causing a problem. You can identify or disable those services by opening msconfig.exe and checking the box for "Hide all Microsoft services" on the Services tab.

The problem with disabling services is that your devices will often not work once you do: for instance, I disabled the "Unknown" dlbt_device service, and could no longer print to my Dell printer... disabling the VMware services made VMware unable to run, and so forth.

You should be even more careful to not disable built-in Microsoft services in Windows, except for a select few under certain circumstances:

  • SuperFetch - This caching service preloads applications into memory, and actually does work. The problem is that it can cause your hard drive to do a lot of grinding while it's working, which is especially irritating on a laptop.
  • Windows Search - If you don't use the Vista search or you use an alternate desktop search engine, you really don't need this service and can increase performance quite a bit by disabling it.
  • Windows Defender - If you are already using another anti-malware product, you really don't need this running as well.

Enabling AlwaysUnloadDLL frees up more memory and improves performance

Reality - "Adding this Registry Key in Windows 2000 or XP has no effect since this registry key is no longer supported in Microsoft Windows 2000 or later. The Shell automatically unloads a DLL when its usage count is zero, but only after the DLL has not been used for a period of time. This inactive period might be unacceptably long at times, especially when a Shell extension DLL is being debugged. For operating systems prior to Windows 2000, you can shorten the inactive period by adding this registry key."


"Adding ConservativeSwapfileUsage=1 to the System.ini file improves performance."

Reality - "The System.ini and Win.ini files are provided in Windows XP for backward compatibility with 16-bit applications (MS-DOS-based programs). They have no effect on the Windows XP paging file settings which are stored in the Registry. This setting only effects Windows 95/98 operating systems. The default setting for ConservativeSwapfileUsage is 1 for Windows 95, and 0 (zero) for Windows 98. On Windows 98 systems you can set ConservativeSwapfileUsage=1 under the [386Enh] heading of the System.ini file causing the system to behave as Windows 95 does, at some cost in overall system performance."


"Setting DisablePagingExecutive to 1 improves performance by preventing the kernel from paging to disk."

Reality - "DisablePagingExecutive applies only to ntoskrnl.exe. It does not apply to win32k.sys (much larger than ntoskrnl.exe!), the pageable portions of other drivers, the paged pool and of course the file system cache. All of which live in kernel address space and are paged to disk. On low memory systems this can force application code to be needlessly paged and reduce performance. If you have more than enough RAM for your workload, yes, this won't hurt, but then again, if you have more than enough RAM for your workload, the system isn't paging very much of that stuff anyway. This setting is useful when debugging drivers and generally recommended for use only on servers running a limited well-known set of applications."


"The built-in Disk Defragmenter is good enough."

Reality - "This statement would be true if the built-in defragmenter was fast, automatic, and customizable. Unfortunately, the built-in defragmenter does not have any of these features. The built-in defragmenter takes many minutes to hours to run. It requires that you keep track of fragmentation levels, you determine when performance has gotten so bad you have to do something about it, and then you manually defragment each drive using the built-in defragmentation tool."


"Adding IRQ14=4096 to the System.ini file improves performance."

Reality - "This is a made up nonexistent command that does absolutely nothing. The System.ini and Win.ini files are provided in Windows XP for backward compatibility with 16-bit applications (MS-DOS-based programs). They have no effect on any Windows XP settings or 32-bit applications which are stored in the Registry."


"Adjusting the Priority of IRQs especially IRQ 8 improves system performance."

Reality - "IRQs don't even HAVE a concept of "priority" in the NT family; they do have something called "IRQL" (interrupt request level) associated with them. But the interval timer interrupt is already assigned a higher IRQL than any I/O devices, second only to the inter-processor interrupt used in an MP machine. The NT family of OSes don't even use the real-time clock (IRQ 8) for time keeping in the first place! They use programmable interval timer (8254, on IRQ 0) for driving system time keeping, CPU time accounting, and so on. IRQ 8 is used for profiling, but profiling is almost never turned on except in very rare development environments. Even if it was possible it doesn't even make sense why adjusting the real-time clock priority would boost performance? The real-time clock is associated with time keeping not CPU frequency. I would not be surprised if this originated in an overclocking forum somewhere. This "tweak" can be found in most XP all-in-one tweaking applications. This is a perfect example of why they are not recommended."


"Enabling LargeSystemCache improves desktop/workstation performance."

Reality - "LargeSystemCache determines whether the system maintains a standard size or a large size file system cache, and influences how often the system writes changed pages to disk. Increasing the size of the file system cache generally improves file server performance, but it reduces the physical memory space available to applications and services. Similarly, writing system data less frequently minimizes use of the disk subsystem, but the changed pages occupy memory that might otherwise be used by applications. On workstations this increases paging and causes longer delays whenever you start a new app. Simply put enable this on a file server and disable it on everything else."

Notes - "System cache mode is designed for use with Windows server products that act as servers. System cache mode is also designed for limited use with Windows XP, when you use Windows XP as a file server. This mode is not designed for everyday desktop use. When you enable System cache mode on a computer that uses Unified Memory Architecture (UMA)-based video hardware or an Accelerated Graphics Port (AGP), you may experience a severe and random decrease in performance. For example, this decrease in performance can include very slow system performance, stop errors, an inability to start the computer, devices or applications that do not load, and system instability. The drivers for these components consume a large part of the remaining application memory when they are initialized during startup. Also, in this scenario, the system may have insufficient RAM when the following conditions occur:

- Other drivers and desktop user services request additional resources.
- Desktop users transfer large files.

By default LargeSystemCache is disabled in Microsoft Windows XP."


"Disabling the Paging File improves performance."

Reality - "You gain no performance improvement by turning off the Paging File. When certain applications start, they allocate a huge amount of memory (hundreds of megabytes typically set aside in virtual memory) even though they might not use it. If no paging file (pagefile.sys) is present, a memory-hogging application can quickly use a large chunk of RAM. Even worse, just a few such programs can bring a machine loaded with memory to a halt. Some applications (e.g., Adobe Photoshop) will display warnings on startup if no paging file is present."

Notes - "In modern operating systems, including Windows, application programs and many system processes always reference memory using virtual memory addresses which are automatically translated to real (RAM) addresses by the hardware. Only core parts of the operating system kernel bypass this address translation and use real memory addresses directly. All processes (e.g. application executables) running under 32 bit Windows gets virtual memory addresses (a Virtual Address Space) going from 0 to 4,294,967,295 (2*32-1 = 4 GB), no matter how much RAM is actually installed on the computer. In the default Windows OS configuration, 2 GB of this virtual address space are designated for each process' private use and the other 2 GB are shared between all processes and the operating system. RAM is a limited resource, whereas virtual memory is, for most practical purposes, unlimited. There can be a large number of processes each with its own 2 GB of private virtual address space. When the memory in use by all the existing processes exceeds the amount of RAM available, the operating system will move pages (4 KB pieces) of one or more virtual address spaces to the computer's hard disk, thus freeing that RAM frame for other uses. In Windows systems, these "paged out" pages are stored in one or more files called pagefile.sys in the root of a partition. Virtual Memory is always in use, even when the memory required by all running processes does not exceed the amount of RAM installed on the system."


Hope that cleared up some of the usual myths. Also, to the XP fanboys who think having 90% of the RAM free is good... I have news for you... it sucks. All applications will reload themselves every time you execute them. By letting Windows use all the available RAM, it will automatically cache the most used applications so they can start almost instantly. Why have 2/4/8 GB of RAM if it's going to be empty all the time? That's why it's called memory... it has to actually hold something to be useful. Don't worry, Windows will free the memory if you start something else that it didn't precache already.


“Research has shown that it's not necessarily the time pressure, but it's the perception of that time pressure that affects you. If you feel you don't have enough time to do something, it's going to affect you,” Case Western Reserve University psychology doctoral student Michael DeDonno explains. He led the study, published in the December issue of the journal Judgment and Decision Making, which examined 163 test subjects who took part in a game called the Iowa Gambling Task (IGT), popular among psychologists.

In this game, participants are told they have to fulfill a task, and are then separated into two groups. One group is informed that it has very little time to complete the assignment, while the other is told that it has sufficient time to execute all the demands of the exercise. In reality, both groups are given the same time-frame to accomplish their objectives. The researchers noted that the people in the group that was told it had little time were far more likely to make mistakes and work in a sloppier manner than those in the control group.

“If I told you that you didn't have enough time, your performance was low regardless of whether you had ample time or not. If you were told you had enough time, in both scenarios, they outperformed those who were told they didn't,” DeDonno adds. “Decision-making can be emotion-based, keep your emotions in check. Have confidence in the amount of time you do have to do things. Try to focus on the task and not the time. We don't control time, but we can control our perception. It's amazing what you can do with a limited amount of time. Time is relevant. Just have the confidence with the time you're given. I tell my students 'Do the best you can in the time allotted. When it ends, it ends.'”


"What is DirectX?"

Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms. Originally, the names of these APIs all began with Direct, such as Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth. DirectX, then, was the generic term for all of these APIs and became the name of the collection. After the introduction of the Xbox, Microsoft also released multiplatform game development APIs such as XInput, which are designed to supplement or replace individual DirectX components.

Direct3D (the 3D graphics API within DirectX) is widely used in the development of video games for Microsoft Windows, Microsoft Xbox, and Microsoft Xbox 360. Direct3D is also used by other software applications for visualization and graphics tasks such as CAD/CAM engineering. As Direct3D is the most widely publicized component of DirectX, it is common to see the names "DirectX" and "Direct3D" used interchangeably.

The DirectX software development kit (SDK) consists of runtime libraries in redistributable binary form, along with accompanying documentation and headers for use in coding. Originally, the runtimes were only installed by games or explicitly by the user. Windows 95 did not launch with DirectX, but DirectX was included with Windows 95 OEM Service Release 2.[1] Windows 98 and Windows NT 4.0 both shipped with DirectX, as has every version of Windows released since. The SDK is available as a free download. While the runtimes are proprietary, closed-source software, source code is provided for most of the SDK samples.

The latest versions of Direct3D, namely Direct3D 10 and Direct3D 9Ex, are only officially available for Windows Vista, because each of these new versions was built to depend upon the new Windows Display Driver Model that was introduced for Windows Vista. The new Vista/WDDM graphics architecture includes a new video memory manager that supports virtualizing graphics hardware to multiple applications and services such as the Desktop Window Manager.


DirectX is made up of several components.

DirectX 10 was introduced exclusively with Windows Vista; previous versions of Windows, such as Windows XP, are not able to officially run DirectX 10-exclusive applications.

DirectX 10.1 is an incremental update of DirectX 10 which is shipped with, and requires, Windows Vista Service Pack 1.[8] This release mainly sets a few more image quality standards for graphics vendors, while giving developers more control over image quality. It also adds support for parallel cube mapping and requires that the video card supports Shader Model 4.1 or higher and 32-bit floating-point operations. Direct3D 10.1 still fully supports Direct3D 10 hardware, but in order to utilize all of the new features, updated hardware is required.

DirectX 11 was unveiled at the Gamefest 08 event in Seattle, with the major scheduled features including GPGPU support, tessellation[11][12] support, and improved multi-threading support to assist video game developers in developing games that better utilize multi-core processors. Direct3D 11 will run on Windows Vista and its successor Windows 7. Parts of the new API such as multi-threaded resource handling can be supported on Direct3D 9/10/10.1-class hardware. Hardware tessellation and Shader Model 5.0 will require Direct3D 11 supporting hardware.

This is the DirectX 10 pipeline

And this is DirectX 11


Many of the enhancements mean higher performance for features already available in DX10 but less used. Tessellation (made up of the hull shader, tessellator and domain shader) and the Compute Shader are major developments that could close the gap between reality and unreality.

Along with the pipeline changes, we see a whole host of new tweaks and adjustments. DirectX 11 is actually a strict superset of DirectX 10.1, meaning that all of those features are completely encapsulated in and unchanged by DirectX 11. This simple fact means that all DX11 hardware will include the changes required to be DX 10.1 compliant, and in addition to these tweaks there are also new extensions.

While changes in the pipeline allow developers to write programs to accomplish different types of tasks, these more subtle changes allow those programs to be more complex, higher quality, and/or higher performance.

DX11 And The Multi-Threaded Game Engine

In spite of the fact that multi-threaded programming has been around for decades, mainstream programmers didn't start focusing on parallel programming until multi-core CPUs started coming along. Much general purpose code is straightforward to write as a single thread; extracting performance via parallel programming can be difficult and isn't always obvious. Even with talented programmers, Amdahl's Law is a bitch: your speedup from parallelization is limited by the percentage of code that is necessarily sequential.
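For the record, Amdahl's Law says that if a fraction p of a program can be parallelized across n processors, the overall speedup is at best:

```latex
S(n) = \frac{1}{(1 - p) + p/n}
```

So even a program that is 90% parallel tops out at 10x no matter how many cores you throw at it, and on 4 cores it manages only about 1 / (0.1 + 0.9/4) ≈ 3.1x.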

No matter what anyone does, some stuff in the renderer will need to be sequential. Programs, textures, and resources must be loaded up; geometry happens before pixel processing; draw calls intended to be executed while a certain state is active must have that state set first and not changed until completion. Even in such a massively parallel machine, order must be maintained for many things. But order doesn't always matter.

By making more things thread-safe through an extended device interface using multiple contexts, and by making a lot of the synchronization overhead the responsibility of the API and/or graphics driver, Microsoft has enabled game developers to more easily thread not only their rendering code, but their game code as well. These things will also work on DX10 hardware running on a system with DX11, though some missing hardware optimizations will reduce the performance benefit. But the fundamental ability to write code differently will go a long way toward getting programmers more used to and better at parallelization. Let's take a look at the tools available to accomplish this in DX11.

First up is free threaded asynchronous resource loading. That's a bit of a mouthful, but this feature gives developers the ability to upload programs, textures, state objects, and all resources in a thread-safe way and, if desired, concurrent with the rendering process. This doesn't mean that all this stuff will get pushed up in parallel with rendering, as the driver will manage what gets sent to the GPU and when based on priority, but it does mean the developer no longer has to think about synchronizing or manually prioritizing resource loading. Multiple threads can start loading whatever resources they need whenever they need them. The fact that this can also be done concurrently with rendering could improve performance for games that stream in data for massive open worlds in addition to enabling multi-threaded opportunities.

In order to enable this and other threading, the D3D device interface is now split into three separate interfaces: the Device, the Immediate Context, and the Deferred Context. Resource creation is done through the Device. The Immediate Context is the interface for setting device state, draw calls, and queries. There can only be one Device and one Immediate Context. The Deferred Context is another interface for state and draw calls, but many can exist in one program and can be used as the per-thread interface (Deferred Contexts themselves are not thread-safe, though). Deferred Contexts and the free threaded resource creation through the Device are where DX11 gets its multi-threaded benefit.

Multiple threads submit state and draw calls to their Deferred Context, which compiles a display list that is eventually executed by the Immediate Context. Games will still need a render thread, and this thread will use the Immediate Context to execute state and draw calls and to consume the display lists generated by Deferred Contexts. In this way, the ultimate destination of all state and draw calls is the Immediate Context, but fine-grained synchronization is handled by the API and the display driver so that parallel threads can be better used to contribute to the rendering process. Some limitations on Deferred Contexts include the fact that they cannot query the device and they can't download or read back anything from the GPU. Deferred Contexts can, however, consume the display lists generated by other Deferred Contexts.
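In API terms, the division of labor looks something like this; a minimal sketch of my own (device creation, draw setup, and error handling omitted):

```cpp
// Sketch of the DX11 threading model: worker threads record into deferred
// contexts, and the render thread replays the resulting command lists on
// the immediate context.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Worker thread: record state and draw calls into a command list.
ID3D11CommandList* RecordScenePart(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = NULL;
    device->CreateDeferredContext(0, &deferred);

    // ... IASet*/VSSet*/PSSet*/Draw calls go here, exactly as they
    // would on the immediate context ...

    ID3D11CommandList* commands = NULL;
    deferred->FinishCommandList(FALSE, &commands);
    deferred->Release();
    return commands;
}

// Render thread: consume command lists in the desired order.
void SubmitScenePart(ID3D11DeviceContext* immediate, ID3D11CommandList* commands)
{
    immediate->ExecuteCommandList(commands, FALSE);
    commands->Release();
}
```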

The end result of all this is that the future will be more parallel friendly. As two and four core CPUs become more and more popular and 8 and 16 (logical) core CPUs are on the horizon, we need all the help we can get when trying to extract performance from parallelism. This is a good move for DirectX, and we hope it will help push game engines to more fully utilize more than two or even four cores when the time comes.


The DX11 Compute Shader and OpenCL/OpenGL

Enter DirectX 11 and the Compute Shader (CS). Developers have the option to pass data structures over to the Compute Shader and run more general purpose algorithms on them. The Compute Shader, like the other fully programmable stages of the DX10 and DX11 pipeline, will share a single set of physical resources (shader processors).

This hardware will need to be a little more flexible than it currently is: when it runs CS code, it will have to support random reads and writes and irregular arrays (rather than simple streams or fixed size 2D arrays), multiple outputs, direct invocation of individual or groups of threads as per the programmer's needs, 32k of shared register space and thread group management, atomic instructions, synchronization constructs, and the ability to perform unordered I/O operations.

At the same time, the CS loses some features as well. As each thread is no longer treated as a pixel, the association with geometry is lost (unless specifically passed in a data structure). This means that, although CS programs can still use texture samplers, trilinear LOD calculations are no longer automatic (the LOD must be specified). Additionally, depth culling, anti-aliasing, alpha blending, and other operations that have no meaning for generic data cannot be performed inside a CS program.
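On the host side, invoking a Compute Shader is pleasantly direct; a minimal sketch of my own (shader compilation and resource creation omitted, variable names are mine):

```cpp
// Host-side sketch of dispatching a D3D11 Compute Shader. Assumes 'blob'
// holds compiled CS bytecode and 'uav' is an unordered access view over
// the data structure the shader reads and writes.
#include <d3d11.h>

void RunCS(ID3D11Device* device, ID3D11DeviceContext* context,
           ID3DBlob* blob, ID3D11UnorderedAccessView* uav)
{
    ID3D11ComputeShader* cs = NULL;
    device->CreateComputeShader(blob->GetBufferPointer(),
                                blob->GetBufferSize(), NULL, &cs);

    context->CSSetShader(cs, NULL, 0);
    context->CSSetUnorderedAccessViews(0, 1, &uav, NULL);

    // Directly invoke a grid of thread groups -- no pixels, no geometry.
    context->Dispatch(64, 1, 1);

    cs->Release();
}
```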

The type of new applications opened up by the CS are actually infinite, but the most immediate interest will come from game developers looking to augment their graphics engines with fancy techniques not possible in the Pixel Shader. Some of these applications include A-Buffer techniques to allow very high quality anti-aliasing and order independent transparency, more advanced deferred shading techniques, advanced post processing effects and convolution, FFTs (fast Fourier transforms) for frequency domain operations, and summed area tables.

Beyond the rendering specific applications, game developers may wish to do things like IK (inverse kinematics), physics, AI, and other traditionally CPU specific tasks on the GPU. Having this data on the GPU by performing calculations in the CS means that the data is more quickly available for use in rendering and some algorithms may be much faster on the GPU as well. It might even be an option to run things like AI or physics on both the GPU and the CPU if algorithms that always yield the same result on both types of processors can be found (which would essentially substitute compute power for bandwidth).

Even though the code will run on the same hardware, PS and CS code will perform very differently based on the algorithms being implemented. One of the interesting things to look at is exposure and histogram data often used in HDR rendering. Calculating this data in the PS requires several passes and tricks to take all the pixels and either bin them or average them. Despite the fact that sharing data is going to slow things down quite a bit, sharing data can be much faster than running many passes and this makes the CS an ideal stage for such algorithms.

So What's a Tessellator?

This has been covered before in other articles about DirectX 11, but we first touched on the subject with the R600 launch. Both R6xx and R7xx hardware have tessellators, but since these are proprietary implementations, they won't be directly compatible with DirectX 11, which uses a much more sophisticated setup. While neither AMD's tessellator nor the DX11 tessellator itself is programmable, DX11 includes programmable input to and output from the tessellator (TS) through two additional pipeline stages called the Hull Shader (HS) and the Domain Shader (DS).

The tessellator can take coarse shapes and break them up into smaller parts. It can also take these smaller parts and reshape them to form geometry that is much more complex and that more closely approximates reality. It can take a cube and turn it into a sphere with very little overhead and much smaller space requirements. Quality, performance and manageability all benefit.

The Hull Shader takes in patches and control points and outputs data on how to configure the tessellator. Patches are a new primitive (like vertices and pixels) that define a segment of a plane to be tessellated. Control points are used to define the parametric shape of the desired surface (like a curve or something). If you've ever used the pen tool in Photoshop, then you know what control points are: these just apply to surfaces (patches) instead of lines. The Hull Shader uses the control points to determine how to set up the tessellator and then passes them forward to the Domain Shader.

The tessellator just tessellates: it breaks up patches fed to it by the Hull Shader based on the parameters set by the Hull shader per patch. It outputs a stream of points to the Domain Shader, which then needs to finish up the process. While programmers must write HS programs for their code, there isn't any programming required for the TS. It's just a fixed function block that processes input based on parameters.

The Domain Shader takes points generated by the tessellator and manipulates them to form the appropriate geometry based on control points and/or displacement maps. It performs this manipulation by running developer designed DS programs which can manipulate how the newly generated points are further shifted or displaced based on control points and textures. The Domain Shader, after processing a point, outputs a vertex. These vertices can be further processed by a Geometry Shader, which can also feed them back up to the Vertex Shader using stream out functionality. More likely than heading back up for a second pass, we will probably see most output of the Domain Shader head straight on to rasterization so that its geometry can be broken down into screen space fragments for Pixel Shader processing.
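On the API side, using the new stages mostly amounts to binding the Hull and Domain Shaders and feeding the pipeline patches instead of triangles; a minimal sketch of my own (shader creation omitted):

```cpp
// Sketch of binding the DX11 tessellation stages: patches go in, and the
// fixed function tessellator sits between the HS and the DS.
#include <d3d11.h>

void BindTessellation(ID3D11DeviceContext* context,
                      ID3D11VertexShader* vs, ID3D11HullShader* hs,
                      ID3D11DomainShader* ds, ID3D11PixelShader* ps)
{
    // Patches are a new primitive type: here, 3 control points per patch.
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);

    context->VSSetShader(vs, NULL, 0);
    context->HSSetShader(hs, NULL, 0);  // configures the tessellator per patch
    context->DSSetShader(ds, NULL, 0);  // displaces the generated points
    context->PSSetShader(ps, NULL, 0);
    // A Draw call now submits control points rather than triangles.
}
```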

That covers the basics of what the tessellator can do and how it does it. But do you find yourself wondering: "self, can't the Geometry Shader just be used to create tessellated surfaces and move the resulting vertices around?" Well, you would be right. That is technically possible, but not practical at this point.

Tessellation: Because The GS Isn't Fast Enough

Microsoft and AMD tend to get the most excited about tessellation whenever the topic of DX11 comes up. AMD jumped on the tessellation bandwagon long ago, and perhaps it does make sense for consoles like the Xbox 360. Adding fixed function hardware to quickly and efficiently handle a task that improves memory footprint has major advantages in the living room. We still aren't sold on the need for a tessellator on the desktop, but who's to argue with progress?

Or is it really progress? The tessellator itself is fixed function rather than programmable. Sure, the input to and output of the tessellator can be manipulated a bit through the Hull Shader and Domain Shader, but the heart of the beast is just not that flexible. The Geometry Shader is the programmable block in the pipeline that is capable of tessellation as well as much more, but it just doesn't have the power to do tessellation on any useful scale. So while almost everything in the rendering pipe has been moving toward programmability, we have something of a step backward here. But why?

The argument between fixed function and programmable hardware is always one of performance versus flexibility and usefulness. In the beginning, fixed function was necessary to get the desired performance. As time went on, it became clear that adding ever more fixed function hardware to graphics chips just wasn't feasible: the transistors put into specialized hardware simply go unused if developers don't program to take advantage of it. This drove a shift toward architectures built around an expanding pool of compute resources that can be shared across many different tasks, in the general case anyway. But that doesn't mean fixed function hardware doesn't have its place.

We do still have the problem that all the transistors put into the tessellator are worthless unless developers take advantage of the hardware. But the reason it makes sense is that the ROI (return on investment: what you get for what you put in) on those transistors is huge if developers do take advantage of them: it's much easier to get huge tessellation performance out of a fixed function tessellator than to pour the resources into the Geometry Shader needed to reach the same tessellation performance programmatically. This doesn't mean we'll see a renaissance of fixed function blocks in our graphics hardware; just that significantly advanced features going forward may still require sacrificing programmability in favor of early adoption of a feature. The majority of tasks will continue to be enabled in a flexible, programmable way, and in the future we may see more flexibility introduced into the tessellator until it becomes fully programmable as well (or ends up merged into some future version of the Geometry Shader).

Now don't let this technical assessment of fixed function tessellation make you think we aren't interested in reaping its benefits. Currently, artists need to create different versions of their objects for different LODs (Levels of Detail: reducing or increasing complexity as the object moves further from or nearer to the viewer), and geometry simulation through texturing at each LOD needs to be done by pixel shaders. This requires extra work from both artists and programmers and costs a good bit in terms of performance. There are also some effects that can only be done with more geometry.

Tessellation is a great way to get that geometry in there for more detail, shadowing, and smooth edges. High geometry density also allows really cool displacement mapping effects. Currently, much geometry is simulated through textures, using techniques like bump mapping and parallax occlusion mapping. Even with dense geometry we will still want large normal maps for our lighting algorithms, but we won't need to work so hard to make cracks, bumps, ridges, and other small-scale geometry appear to be there when it isn't, because we can just tessellate and displace in a single pass through the pipeline. This is fast and efficient, and it can produce very detailed effects while freeing up pixel shader resources for other uses. With tessellation, artists can create one subdivision surface that gets dynamic LOD essentially free of charge; a simple hull shader and a displacement map applied in the domain shader will save a lot of work, increase quality, and improve performance quite a bit.

If developers adopt tessellation, we could see some very cool things, and with the move to DX11-class hardware both NVIDIA and AMD will be making parts with tessellation capability. But developers may not start using tessellation (or the Compute Shader, for that matter) right away. DirectX 11 will run on down-level hardware, and at DX11's release there will already be a huge number of cards on the market capable of running a subset of it, with access to the better, more refined programming language in the new version of HLSL and its seamless parallelization optimizations. We will therefore very likely see the first DX11 games implement only features that can run completely on DX10 hardware.

Of course, at that point developers can be fully confident of exploiting all the aspects of DX10 hardware, which they still aren't completely taking advantage of. Many people still want and need a DX9 path because of Vista's failure, which means DX10 code tends to be more or less an enhanced DX9 path rather than something fundamentally different. So when DirectX 11 finally debuts, we will start to see what developers could really do with DX10.

Certainly there will be developers experimenting with tessellation, but at first this will probably just be simple amplification to smooth out the jagged edges around curved surfaces. It will take time for the truly advanced tessellation techniques everyone is excited about to come to fruition.

The final bit of DX11 we'll touch on is the update to HLSL (Microsoft's High Level Shading Language) in version 5.0, which brings some very developer-friendly adjustments. While HLSL has always been similar in syntax to C, 5.0 adds support for classes and interfaces. We still don't get to use pointers, though.

These changes are being made because of the sheer size of shader code. Programmers and artists need to build or generate either a single massive shader or tons of smaller shader programs for any given game. These code resources are huge and can be hard to manage without OOP (Object Oriented Programming) constructs. But there are some differences from how things work in other OOP languages. For instance, there is no need for memory management (because there are no pointers) or constructors/destructors in HLSL. Tasks like initialization are handled through updates to constant buffers, which generally reflect member data.

Aside from the programmability aspect, classes and interfaces were added to support dynamic shader linkage, which combats the intricacy of developing with huge numbers of resources and effects. Dynamic linking allows the application to decide at runtime which shaders to compile and link, and enables interfaces to be left ambiguous until runtime. All possible function bodies are compiled and optimized up front, but the compiled hardware-native code isn't finalized and inlined until the appropriate SetShader function is called.
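As a rough illustration of what this looks like on the HLSL side (the interface, classes, and light models below are invented for the example, not from any shipping codebase), an interface instance can be declared and left unresolved in the shader, with the concrete class chosen by the application at runtime:

    interface ILightModel
    {
        float3 Shade(float3 normal, float3 lightDir, float3 albedo);
    };

    class LambertLight : ILightModel
    {
        float3 intensity; // member data is backed by a constant buffer (below)
        float3 Shade(float3 normal, float3 lightDir, float3 albedo)
        {
            return albedo * intensity * saturate(dot(normal, lightDir));
        }
    };

    class HalfLambertLight : ILightModel
    {
        float3 intensity;
        float3 Shade(float3 normal, float3 lightDir, float3 albedo)
        {
            float ndl = dot(normal, lightDir) * 0.5f + 0.5f;
            return albedo * intensity * ndl * ndl;
        }
    };

    // Class instance data lives in a constant buffer, per the initialization
    // note above...
    cbuffer Lights : register(b0)
    {
        LambertLight     gLambert;
        HalfLambertLight gHalfLambert;
    };

    // ...while the interface instance stays ambiguous until runtime.
    ILightModel gLight;

    float4 MainPS(float3 normal   : NORMAL,
                  float3 lightDir : TEXCOORD0,
                  float3 albedo   : COLOR0) : SV_Target
    {
        return float4(gLight.Shade(normalize(normal), normalize(lightDir), albedo), 1.0f);
    }

On the application side, the shader is created with an ID3D11ClassLinkage object; at draw time the desired class instance is fetched by name with ID3D11ClassLinkage::GetClassInstance and handed to PSSetShader, which is the point where the hardware-native code gets finalized.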

The flexibility this provides will enable development of much more complex and dynamic shader code, as it won't all need to live in one giant block with lots of "ifs", nor will there need to be thousands of smaller shaders cluttering up the developer's mind. Shader performance will still limit what can be done, but with this step DirectX helps reduce code complexity as a limiting factor in development.

With all of this - the ability to perform unordered memory accesses, multi-threading, tessellation, and the Compute Shader - DX11 is pretty aggressive. The complexity of the upgrade, however, is mitigated by the fact that this is nothing like the wholesale changes made in the move from DX9 to DX10: DX11 is really just a superset of DX10 in terms of features. This lets DX11 run on down-level hardware (where DX11-specific features simply go unused), which, combined with the OOP and dynamic shader linkage enhancements to HLSL, means developers should have fewer qualms about moving from DX10 to DX11 than we saw with the transition from DX9. (Of course, that's nothing new: the first DX8 games shipped when DX9 was out, and it wasn't until DX10 that we saw a reasonable number of DX9 titles.)

To be fair, the OS upgrade requirement also threw a wrench in the works. That won't be as big a problem this time: Vista still sucks, but it will be getting DX11 support, and Windows 7 looks like a better upgrade option for XP users than Vista ever was. Developers who haven't already moved from DX9 may well skip DX10 altogether in favor of DX11, depending on the predicted ship dates of their titles; all signs point to DX11 as setting the time frame when the revolution promised with the move to DX10 finally takes place.

USER INTERFACE

Aero Desktop (improved)

  • Aero Peek: A specific open window or all open windows can be made transparent
  • Aero Snap: Open windows can be snapped to screen borders
  • Aero Shake: Shaking a window minimizes all other open windows; shaking it again restores them
  • Maximize a window by dragging its border to the top of the screen
  • Dragging the bottom border expands the window vertically
  • Dragging two windows to opposite sides will resize them to fill half of the screen

Windows Taskbar (the Superbar) (improved)

  • Graphic thumbnails for open windows
  • Switch between multiple windows by just hovering over the taskbar thumbnail
  • Icons are big enough to be selected easily with the new touch feature
  • Applications can use the taskbar to provide information (a progress bar for example)

Libraries (new)

  • Libraries are containers similar to folders, but their content is based on file properties such as file type, pictures by date taken, or music by genre
  • There are default libraries (documents, music, pictures, etc.) and one can create personalized libraries
  • Libraries can also contain files on network shares, as long as they are indexed by Windows Search

Jump lists (improved)

  • Jump lists are automatically populated links in the Start Menu to frequently accessed sources (apps, documents, etc.)
  • Taskbar items, Internet Explorer, and Windows Media Player will have them too, allowing you to jump directly to a specific task of a program

Windows Sidebar (improved)

  • Is no longer a sidebar
  • Gadgets are now placed on the desktop
  • Gadgets are resizable
  • Aero Peek lets you see gadgets behind open windows

Windows Explorer (improved)

  • New user interface (have to find out more)
  • New copy engine: fewer prompts, shows file names being copied, more reliable

Scenic Ribbon (new)

  • Paint and WordPad now have a ribbon similar to the one in Office 2007
  • Third-party developers can integrate ribbons into their apps

Start Menu Search (improved)

  • Searches in Libraries (also external files)
  • Search results are grouped according to Libraries
  • System administrator can define up to five external search destinations
  • Search will be executed on the server

Windows Search 4 (improved)

  • Input recommendations based on previous searches
  • Dynamic filters to narrow down results
  • New relevance algorithm
  • Word highlighting in results
  • Search Federation: search external resources (servers, Sharepoint, Web sites (OpenSearch))

Tablet PC enhancements (improved)

  • Supposed to have improved handwriting recognition
  • Supports handwritten math expressions
  • Personalized custom dictionaries and support for new languages

Sticky Notes (improved)

  • Ink support
  • Paste support
  • Note colors
  • Resize possible

Accessibility (improved)

  • Improved speech recognition
  • Magnifier (whole desktop or portion of the screen)
  • Accessibility support tools for developers

Windows Touch (new)

  • Windows 7 can be controlled by touching the screen
  • It also supports multi-touch, allowing you to use more than one finger

Other desktop enhancements (improved)

  • More styles
  • Region specific styles
  • Multilingual browsing no longer requires the installation of language-specific fonts

APPLICATIONS AND FUNCTIONS

Internet Explorer 8 (improved)

I will only discuss the major changes here.

  • IE8 is supposed to be faster and more stable
  • Rendering engine: IE8 will improve compliance with web standards; web sites can opt in to IE7 mode for backward compatibility
  • InPrivate: A private browsing mode that leaves no local traces such as history, cookies, or temporary files
  • Accelerators: Allows you to invoke a web application by selecting an object on a web page (for example for blogging)
  • Web Slices: Users can subscribe to snippets on a web page, which will be updated automatically by the browser
  • SmartScreen Filter: This is the new name for the Phishing Filter. It has a new user interface, is supposed to perform faster, and has new heuristics, anti-malware support, and improved Group Policy support.
  • Automatic Crash Recovery: The browser will reload the pages after a crash

Calculator (improved)

  • New user interface
  • Calculation history
  • Unit conversion
  • Calculation templates
  • Date calculations
  • Controls that are optimized for touch

XPS Viewer (XML Paper Specification) (improved)

  • New user interface
  • Relevancy-ranked XPS search
  • Thumbnails provide an interactive view of several pages at a time
  • Preview XPS documents in Windows Explorer and Office

HomeGroup (new)

  • Windows 7 computers can connect automatically with each other to share resources (files, printers, etc.)
  • Users can decide what they want to share
  • Support for Libraries
  • Can be configured via Group Policy

MinWin (new)

The most comprehensive source about MinWin, the new mini kernel in Windows 7, is this interview with Mark Russinovich. Robert McLaws translated the main part for non-developers.

  • MinWin is fully bootable
  • Requires 25-40MB of disk space
  • It contains the executive systems, memory management, networking, and optional file system drivers
  • Components only call down the stack, and not up
  • It allows building of less bloated Windows editions that run on netbooks and other computers with limited hardware resources

ReadyBoost (improved)

  • Allows you to concurrently use multiple flash drives.

Battery life (improved)

  • Reduced background activities when the computer is idle
  • Adaptive display brightness: Display brightness is reduced after a period of inactivity
  • Less processing power for DVD playback
  • Wake on Wireless LAN
  • Smart Network Power: The network adapter is powered down when the cable is unplugged
  • Better battery life notification
  • A new power config tool

Windows Media Player (improved)

  • Supports more media formats
  • Improved performance
  • Taskbar thumbnail (displays titles, and offers controls)
  • Jump list in the Windows Start Menu and the taskbar
  • Stream media to other PCs at home
  • Stream media to DLNA v1.5-compliant digital media renderers

Media Center (improved)

  • New user interface
  • Broader support for global TV standards
  • Share TV shows at home via TV Libraries

Sound (improved)

  • New standard Bluetooth audio driver
  • Automatically streams music or voice calls to the active output device
  • Control volume independently for each device

Process Reflection (new)

  • Crashed processes are cloned in memory
  • Windows 7 tries to recover the cloned process and diagnoses the failure conditions of the original process
  • It should reduce the disruption caused by diagnosing failed processes

Fault tolerant heap (new)

  • The new fault tolerant heap is supposed to reduce the number of crashes significantly.

SECURITY

User Account Control (UAC)

Windows 7 has two new UAC settings:

  • Program-based changes only: Don’t notify when the user installs software or changes settings
  • Notify only: The user is only notified through a balloon message, but doesn’t have to confirm a prompt

The first one is the default setting in build 6801. Microsoft also says that the number of system applications and tasks that require elevation has been reduced. Please check out my more detailed article about Windows 7 UAC.

Action Center (new)

  • Consolidates alerts from Security Center, Problem Reports and Solutions, Windows Defender, Windows Update, Diagnostics, Network Access Protection, Backup and Restore, Recovery, and User Account Control
  • A new icon in the notification area will be displayed whenever one of these apps needs attention
  • Thus, fewer notifications will be displayed on the desktop

AppLocker (new)

  • Restrict program execution on user desktops based on publisher signature
  • Example: Allow all versions greater than 9.0 of the program Acrobat Reader to run if they are signed by the software publisher Adobe

BitLocker (improved)

  • Simplified deployment: Automatic repartitioning if deployed after OS installation
  • Data Recovery Agent (DRA): Single key for the whole organization that can recover data on any BitLocker-encrypted volume

BitLocker To Go (new)

  • Encrypt portable storage devices
  • Make data protection of removable storage devices compulsory, network-wide
  • Require strong passwords or smart card via Group Policy
  • Read-only access to encrypted devices is supported on Windows Vista and Windows XP

Windows Defender (improved)

  • Now integrated with the new Action Center
  • New user interface
  • Better continuous monitoring

Windows Filtering Platform (improved)

  • Third party firewalls can selectively turn off features of the Windows Firewall
  • Third parties can add custom features to the Windows Firewall
  • Multiple active firewall profiles: Allows a single set of firewall rules for remote clients and for clients physically connected to the corporate network

Support for Fingerprint Readers (new)

  • Log on to Windows 7 using a fingerprint reader

Smart card support (improved)

  • Plug-and-play support
  • Support for ECC-based smart cards

Backup and Restore (improved)

  • Support for backups to network shares

System Restore (improved)

  • Now displays a list of programs that will be removed or added
  • System restore points are available in backups

Auditing (improved)

  • Configuration via Group Policy
  • Audit granted or denied access to specific information
  • Easier monitoring of the changes made by specific people or groups

DNS Security Extensions (DNSSEC) support (new)

  • Helps prevent DNS spoofing and cache poisoning attacks

NETWORKING

Windows Connect Now (WCN) (improved)

  • WCN now supports Wi-Fi Protected Setup (WPS), an industry standard that simplifies WLAN setup.

Wireless device installation (improved)

  • A new device wizard allows you to connect wireless devices such as printers or network attached storage
  • Drivers are downloaded automatically if necessary

View Available Network (VAN) (new)

  • One-click access to available networks (Wi-Fi, Mobile Broadband, Dial-up, VPN)

Wireless Device Network (new)

  • Windows 7 PC acts as a wireless access point

Mobile Link (new)

  • Installs wireless data cards without the need for additional software
  • Process is similar to connecting to a wireless network

Direct Access (new)

  • A remote connection to the corporate network is established automatically whenever an Internet connection is available
  • Access to public Web sites is not routed via the corporate network
  • VPN is not required
  • Requires Windows Server 2008 R2 plus IPv6 and IPsec

VPN Reconnect (new)

  • Automatically reestablishes broken VPN connectivity

Offline Files (improved)

  • Offline files are now copied to the Offline Files cache and then synchronized in the background with the server
  • In this way, one doesn’t have to wait for files to be moved to the server after logging on
  • Improved management: More Group Policy settings, configurable time and time intervals for synchronization, maximum stale time, etc.

Roaming user profiles (improved)

  • Automatically synchronize users’ profiles with the server while users are still logged on
  • Users can roam from one PC to another while remaining logged in to both PCs and still have the same consistent environment

BranchCache (new)

  • Caches content from remote file and Web servers on a server in a branch location
  • The cache can also be distributed across user PCs
  • Requires Windows Server 2008 R2

RDP features (improved)

  • Links to RemoteApp applications and desktops on Windows Server 2008 R2 servers are automatically integrated into the Start Menu
  • Remote Desktop & Application feed: Allows end users to launch remote applications from a central location.
  • Multi-monitor support for virtual desktops
  • Support for bidirectional audio: Enables use of microphones for VoIP
  • Use of local printer driver possible
  • Multimedia redirection
  • Aero Glass support

ADMINISTRATION

VHD image management and deployment

  • Virtual Hard Disk (VHD) files can be deployed using Windows Deployment Services
  • VHD files can be managed using DISM (see below)
  • Boot from a VHD file: This feature allows the reuse of the same master image for virtual desktops (VDI) and physical desktops

Deployment Image Servicing and Management (DISM) tool

  • A command line tool that combines functionality of International Settings Configuration (IntlCfg.exe), PEImg, and Package Manager (PkgMgr.exe). It allows you to update operating system images (drivers, language packs, features, updates)
  • Vista image management tools still work for Windows 7
  • DISM can also be used for Vista

Dynamic Driver Provisioning

  • Drivers can be stored centrally on a server, separate from images
  • They are installed dynamically based on Plug and Play IDs or BIOS information
  • Reduces the number of drivers on an individual machine and potential driver conflicts
  • Reduces size and number of system images
  • Speeds up installation

Multicast Multiple Stream Transfer

  • Broadcast image data to multiple clients simultaneously
  • Group clients with similar bandwidth capabilities into network streams
  • Define minimum performance thresholds to automatically remove slower computers from a multicast group

User State Migration Tool (USMT)

  • Hardlink migration: Files are not moved on the hard disk but redirected to improve performance; discovers user documents at runtime
  • Support for volume shadow copy: Migrate files that are being used by an application

PowerShell

  • Windows 7 will be delivered with PowerShell 2.0
  • PowerShell Integrated Scripting Environment (ISE)
  • PowerShell Remoting: Run scripts remotely on a single or multiple PCs
  • Script Internationalization: Localized messages
  • PowerShell Restricted Shell: Only certain commands and parameters are available
  • Automating Group Policy: Use scripting to manage Group Policy Objects
  • Support for logon, startup, and shutdown PowerShell scripts

Management of external machines

  • PCs connected via Direct Access can be managed via Group Policy
  • Can be accessed via Remote Assistance
  • Can be updated via enterprise management tools

Group Policy (improved)

  • New policies and preferences: BitLocker To Go, AppLocker, auditing, power management, task scheduling
  • Custom Group Policy preferences
  • Starter Group Policy Objects: Preconfigured administrative templates for recommended policies
  • URL-Based Quality of Service: Prioritize network traffic based on URL via Group Policy

Windows Troubleshooting (improved)

Windows 7 has built-in troubleshooters for performance, programs, devices, networking, printing, display, sound, and power efficiency. (I have to find out what is really new here. Do you have any idea?)

Startup Repair (improved)

  • Startup Repair is now installed automatically
  • After an unsuccessful boot, Windows 7 will load Startup Repair and try to automatically repair the installation

Troubleshooting and support (improved)

  • Problem Steps Recorder: End users can record their experience with an application failure with each step recorded as a screen shot along with accompanying logs and software configuration data
  • Windows Recovery Environment (Windows RE) enhancement: Windows 7 automatically installs Windows RE into the operating system partition by default
  • Windows Troubleshooting Packs: Collections of PowerShell scripts and related information that can be deployed via CAB files and executed remotely from the command line or through Group Policy; Troubleshooting Packs are built with the Windows Troubleshooting Toolkit, a GUI that is part of the Windows 7 Software Development Kit (SDK)
  • Unified Tracing: A tool that collects network-related event logs and captures packets across all network layers
  • Remote access to reliability data: Access data of the Reliability Monitor through Windows Management Instrumentation (WMI)

Device Management (improved)

  • Device Stage: A new central location for all external devices that provides information about the device status and lets you run common tasks; the device manufacturers provide the contents for their devices; examples for common tasks: media synchronization with portable media players, PIM synchronization, ringtone editor.
  • Drivers of new devices are automatically downloaded
  • Location Aware Printing: Windows 7 changes the default printer depending on the connected network (home, work, etc.)
  • Support for Bluetooth 2.1: simpler device pairing, better security, better power savings, Group policy support
  • Support for wireless alternatives to USB: Ultra Wideband (UWB), Wireless USB (WUSB), Wireless Host Controller Interface (WHCI), Device Wire Adapter (DWA)
  • Blu-ray Disc write support
  • Sensor and Location Platform: Support for devices such as ambient light sensor, GPS, temperature gauge, etc.
  • Display Color Calibration: Helps adjust an LCD display to be as close as possible to the sRGB standard color space
  • High DPI support: enlarge text display when using high resolutions (This is already possible now, not sure what is new here.)
  • ClearType tuner: (Not sure what is new; do you know?)
  • Improved support for external displays: Windows key + P to toggle between your laptop screen and an external display