The Nvidia Quadro K6000 and Kepler Technology Reviewed
Published:
In the world of the digital creative professional, expectations are always on the rise and results are expected under ever-increasing time pressures. When project delay penalties can run into the tens of thousands, you want nothing less than a pinnacle product. Nvidia’s Quadro K6000 is just such a flagship unit, representing the most advanced technology that Nvidia has brought to market.
The numbers for the K6000 speak for themselves, so here they are:
2880 Stream Processors
240 Texture Units
48 ROPs
A core clock of 900MHz
12GB of GDDR5 at 6GHz
A 384-bit memory bus
Max power usage of 225W
A 28nm manufacturing process
There is no faster workstation graphics card manufactured by Nvidia today, but to really appreciate the leap forward that the K6000 represents we have to compare it to the card it replaces, the Fermi-based Quadro 6000. Although the graphics performance of the K6000 is only about 1.7 times that of the 6000, it has 5 times the compute power and double the memory. Clearly the Kepler design is following the modern trend of massive, parallel, GPU-accelerated computing for huge data sets, and as more professional software tools bake in support for this technology, the more important it will become.
The huge 12GB frame buffer and massive number of shaders point to this being the closest you can come to a supercomputing solution in a workstation form factor today. The fact that the K6000 achieves all of this while only using 21W more at peak than the Quadro 6000 is frankly mind-blowing. In terms of raw power the Geforce Titan Z gives the K6000 a run for its money in many applications, especially since the Titan Z allows for a quad-GPU solution whereas the K6000 will only work in a dual-GPU arrangement. We mustn’t forget, however, that Quadro cards are not built for performance at all costs, but for the best performance that can be delivered 24/7, any day and every day. The GPUs put into K6000 cards are top-binned components and the supporting electronics are of a much higher grade than those found in a consumer-grade device. In addition to this you will have access to round-the-clock client support directly from Nvidia and can rest assured that the K6000’s drivers have been carefully optimised and certified for applications such as Maya and AutoCAD.
The leap in compute power from the last generation to this one is so huge that VFX and SciTech professionals suddenly have possibilities and practicalities open to them that were simply unthinkable a single hardware generation ago. It’s been reported on Nvidia’s own blog that Pixar professionals using the K6000 can now do things in real time that were previously simply not possible. Not to mention that high-speed 4K video production is now within reach with this card, especially thanks to the huge 12GB frame buffer.
If you think the K6000 is what you need to get the job done, don’t hesitate to get in touch with us; our friendly and highly experienced staff will recommend a K6000-based build optimized for your individual needs.
What’s the difference between an Intel Xeon and an Intel i7 processor?
Published:
Consumer-grade hardware has come a long, long way in the last few years, and the gulf between high-end enthusiast equipment and workstation-grade professional hardware is not quite as wide as it used to be. So you may be seriously tempted to go for (relatively) cheaper high-end consumer components when putting together a workstation computer. That can be a perfectly fine approach: you should spend your money on a computer appropriate to your present and, hopefully, future needs. Knowingly and purposefully choosing a normal consumer component because you know it will do the job is not a bad thing, but you need to understand the real differences between these two classes of hardware to make an informed decision.
In the case of CPUs it is an especially important choice, since the specific socket and architecture of your chosen CPU(s) will affect all other component choices, such as which motherboards are available and what memory you can use. In this post we will look at the key differences between the consumer enthusiast Core i7 CPUs and the server- and workstation-grade Xeon CPUs.
The first thing to get out of the way is that the main difference between these processor lines is not performance. For all intents and purposes, an i7 and a Xeon matched core-for-core and clock-for-clock have essentially the same computational power. That may sound like an open-and-shut case, but the way CPUs are fabricated points to the first important difference between these lines. When CPUs are mass produced they are quality checked and sorted into different bins. Because of the complexity of a given architecture and imperfect manufacturing methods, the yield of units that perform close to the architecture’s theoretical optimum is often not very high. So a relatively small percentage of CPUs test at the top, and these become the high-end flagship products. The units that don’t measure up are often sold as lower-specified parts. Usually this just means setting the standard clock lower, but it can also mean that some physical on-chip components, such as cache memory units or entire cores, are disabled or non-functional.
When AMD had yield issues with some of its Phenom quad-core CPUs a few years ago, it cleverly re-branded and resold them as triple-cores with the faulty core disabled. As a particular architecture matures, yields go up, which means that in some cases perfectly good higher-end CPUs are artificially crippled to feed certain market-sector demands. That is why some generations of chips have become legendary for their low price but high overclocking potential. In some cases entire cores could be reactivated, giving you a free quad-core when you paid for a triple.
This is the key difference between the i7 and Xeon CPU lines. Xeons have a much more stringent set of parameters to adhere to in order to be binned as such. While i7 CPUs are binned for their performance characteristics, Xeon CPUs are binned for low-voltage and high-stability characteristics: they are chosen for their ability to run 24/7 in enclosed rackmount server environments. Less voltage for a given clock speed means less heat and, of course, less electricity. Core i7 CPUs go into well-ventilated high-end PCs where the extra heat and power are not that much of a consideration. Xeons also lack the on-die GPU of some of the i7 series, which further improves their thermal characteristics.
Xeon CPUs also support server-grade buffered ECC RAM, which has built-in error-correction hardware that normal desktop RAM lacks. If you paired this RAM with an i7, the system would not even boot. ECC RAM is important where accuracy is mission critical, such as precise research applications, or where stability is essential. In any event, an i7 CPU will only accept up to 64GB of unbuffered RAM, whereas a Xeon will take 512GB+ of fully buffered ECC RAM.
Another very important factor is that only Xeon CPUs will work in multi-socket configurations, which means that only those chips can give you the maximum number of CPU cores in one PC.
Since Xeons are top-binned products, the newest technology often appears there first too. At present the only 12-core CPUs in Intel’s range are Xeons, with an i7 version sure to follow once yields have become good enough.
Hopefully you will now have a solid understanding of what separates high-grade chips such as the Xeon from consumer grade products such as the i7.
For a great example, check out the TitanUS X650, an ultimate multi-threading machine sporting a maximum of 48 cores through a quad-CPU Xeon configuration. There is simply no way to achieve such a build using something like the Core i7.
It seems like just yesterday that Windows 8 was released to what can at best be described as mixed reactions. Under the hood there were many improvements to the kernel, and the stability, security and performance of the new operating system were measurably and markedly better than those of Windows 7. Unfortunately the GUI and UX were major sticking points for many consumers and were rightly criticised in technology media outlets. Since its original release we’ve seen one major update in the form of Windows 8.1, which has gone some way to addressing the UX and UI concerns. It would appear that at least one more update is in the pipeline for Windows 8, predictably named Windows 8.2.
The rumour mills have however begun to churn in earnest and it’s beginning to look as if the next major version of Windows will come to market in 2015. There is very little official talk about the next version of Windows, which we’ll just refer to as Windows 9 for the sake of convenience.
Before we say more, just know that almost any information about Windows 9 at this point could very well change by release time, that is if the info was accurate to begin with. Many of the details being reported by the technology press come from unofficial sources who are not affiliated with Microsoft.
The most obvious change, it seems, is in the UI. While Microsoft is probably not going to do a U-turn on its touch-screen-driven strategy, the new version of Windows seems to concede that the schizophrenic split between the traditional desktop and the tablet-like Metro UI wasn’t the resounding success the company had hoped for. Shots presumed to be from early builds of Windows 9 show a type of hybrid Start Menu that incorporates elements of both the old Start Menu and the Windows 8 Metro menu. These may also be from the Windows 8.2 update, but that still means the feature is likely to appear in Windows 9.
The growing mobile market has clearly caught Microsoft’s attention, so it seems the power management of Windows 9 will be significantly more advanced than its predecessors’, taking advantage of modern CPU features that can virtually turn off circuitry when idle, as well as innovations in the code. The other effect of this shift toward a focus that also includes smartphones and tablets is the push for one unified Windows platform across all devices, making it possible to write one “universal” Windows application that runs on everything. Windows 9 may very well be the first step toward that unified-platform dream.
There are a number of other speculated features such as better cloud integration, gesture support and windowed Modern UI apps, but the most interesting thing about Windows 9 might very well be how it is sold. According to some sources Windows 9 might be sold using a subscription model in the same vein as Office 365. It’s even been suggested that the base OS will be free with users paying for specific features.
Nothing we have heard about Windows 9 so far is directly significant to the professional computing and workstation sector, but the new pricing model might very well make the new Windows a real alternative for clients that need multiple OS licences and have thus far opted for Linux. Price is far from the only deciding factor, of course, but it might be an interesting wrinkle.
Our Intel and AMD workstations have always sported the latest in software, and we expect the new version of Windows to have better support for multithreading and perhaps even OS-level GPU compute support, although we have seen nothing to indicate this as yet. Rest assured, however, that we’ll get our hands on a preview copy as soon as possible and start benching the new software. By some accounts we may even see a preview build released before the end of 2014, so watch this space.
Just as a chain is only as strong as its weakest link, a computer is only as fast as its slowest component. Although magnetic hard disk drives have come on leaps and bounds in terms of storage density, reliability and size, their performance improvement over the years has come nowhere close to matching that of other computer technologies such as CPUs.
You might have some of the most powerful CPUs and GPUs on the planet in your workstation, but they’re nothing but expensive paperweights if all they do is wait to be fed with data. For many high-end computing applications mechanical hard drives are just not up to the task of keeping the pipes fed and this is where SSD technology comes in.
SSDs, or Solid State Drives, have no moving parts and consist entirely of electronic memory similar to RAM or thumb-drive flash memory. Because an SSD is completely electronic it isn’t hampered by the same laws of physics that hold back mechanical hard drives, such as how fast a platter can spin or a read-write arm can move. Since SSDs are a relatively new technology they are still more expensive and don’t offer as much storage as mechanical hard drives, but they are catching up fast.
At TitanUS we recommend the Samsung 840 Pro SSD to most of our clients as an exceptional mix of performance and value.
Starting at 64GB, the 840 Pro is available in capacities up to 512GB. With a current price-per-GB that ranges from $0.70 to about $1, the 840 Pro is not unreasonably priced. The SSD uses the SATA 6Gbps interface. Sequential read speeds have been measured at 540MB/s and sequential writes at 450MB/s. These are top-tier figures and ideal for sequential tasks where data is fed in a predictable way.
In addition to this, the 840 Pro is very light and thin, measuring only 7mm thick, which also makes it a great option for one of our high-end Mobile Workstation Laptops. Its low power consumption and heat generation extend battery life and, for desktop workstation computers, decrease strain on PSUs while saving on the electricity bill.
Because of the great value for money the 840 Pro drives represent, you can get around the issue of relatively small capacities by installing multiple drives, or configure these already phenomenally fast drives into a RAID array that will deliver truly amazing sequential read figures.
How to Enable GPU Acceleration in Adobe Premiere Pro CS 5 for Unsupported nVidia GeForce Cards
Published:
Did you know that, since Premiere Pro CS5, Adobe has been weaving support for GPU acceleration into the software? It’s early days yet, so at this point Premiere Pro CS5 only supports Nvidia’s CUDA GPU acceleration technology and, on top of that, only a small list of cards from Nvidia’s range. So if you don’t have a card from that list you’re out of luck, right?
Well, if you own a card from AMD then you really are out of luck, but it turns out that with a little bit of tweaking under the hood, Premiere Pro CS5 will work with other Nvidia CUDA-enabled cards that aren’t explicitly listed.
The process is quite simple. The first thing you need to do is locate an executable called “GPUSniffer.exe”. This file is usually found in the “Adobe Premiere Pro CS 5” folder, though the exact path depends on the specific version of the software and of Windows you have. The easiest way to determine the correct folder is to right-click the shortcut you would usually use to launch the application and check the folder path under its properties. Use Windows Explorer to confirm that GPUSniffer.exe is indeed present in that folder.
GPUSniffer is an application that some Adobe products use to identify your GPU and enable the correct supported features for that card. You’ll need the output from GPUSniffer to extract the identification string Adobe uses for your card. Although you can run GPUSniffer directly from Windows Explorer, the program closes as soon as it has finished running, so there’s no chance to see the output in time. You will therefore have to run the program from the Command Prompt.
If you’ve never opened the Command Prompt, it’s pretty easy. In older versions of Windows all you need to do is open the Start Menu, click the “Run” option, type “CMD” into the window that appears and hit Enter; the Command Prompt will open. Change to the directory that contains GPUSniffer by typing “CD C:\Program Files\Adobe\Adobe Premiere Pro CS 5”, or whichever directory you previously confirmed the executable to be in. Once you’ve done that, run the program by typing “GPUSniffer” and hitting Enter. The program will run and you will be able to see the type of graphics card it has identified; for example, it may say “name: Geforce GTX 460”. Write down that exact name, including spaces and capitalisation: it’s what we need for the next step.
In the same folder as GPUSniffer, you’ll find a text file named “cuda_supported_cards.txt”. Open it with Notepad or another text editor; you may need to run the editor as Administrator in order to save changes. You’ll see a list of supported cards already in the file. Just add yours, exactly as it was listed by GPUSniffer, on the first available line and save the file.
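If you’d rather script the GPUSniffer and text-file steps than do them by hand, a few lines of Python can handle it. The sketch below is only an illustration, not an official Adobe or Nvidia tool: it assumes Python is installed, that premiere_dir points at the folder you confirmed earlier (adjust it to your install), and that GPUSniffer prints a “name:” line as described above. Run it from an elevated prompt so the text file can be written.

```python
# Rough sketch: run GPUSniffer, pull out the "name: ..." line and
# append that card name to cuda_supported_cards.txt if it's missing.
import subprocess
from pathlib import Path

# Adjust this to the folder you confirmed via the shortcut's properties.
premiere_dir = Path(r"C:\Program Files\Adobe\Adobe Premiere Pro CS 5")

# Capture GPUSniffer's console output instead of letting its window vanish.
result = subprocess.run([str(premiere_dir / "GPUSniffer.exe")],
                        capture_output=True, text=True)

card_name = None
for line in result.stdout.splitlines():
    if line.strip().lower().startswith("name:"):
        card_name = line.split(":", 1)[1].strip()  # e.g. "Geforce GTX 460"
        break

supported = premiere_dir / "cuda_supported_cards.txt"
if card_name and card_name not in supported.read_text():
    with supported.open("a") as f:   # needs Administrator rights
        f.write(card_name + "\n")
    print("Added:", card_name)
```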
Before you start Premiere, open the Nvidia Control Panel and under “Manage 3D Settings” choose Premiere CS4 (this profile still works). Under “Specify the settings for this program,” go to “Multi-display/mixed-GPU acceleration” and choose compatibility performance mode. Remember to apply your settings. When you now run CS5 and go to the settings for the Mercury Playback Engine, you should see CUDA acceleration as an option in addition to software rendering.
Remember to test the feature to make sure it is stable and works correctly; since this isn’t officially supported by Nvidia or Adobe, there are no guarantees.
What’s in a name? If you’re going to name one of your products the “Titan” you had better put your money where your mouth is, and it looks as if Nvidia has done just that with the Geforce GTX Titan Z. The Titan Z sports two full-fat GK110 GPUs for a staggering total of 5760 CUDA cores. Not even the formidable Quadro K6000 comes close to this as a single-card solution. The Titan Z has 12GB of GDDR5 RAM, which matches the memory allocation of the K6000, but since the Titan’s 12GB is mirrored and split between the two GPUs the actual effective frame buffer is 6GB, which is the number you should keep in mind when comparing cards.
The Titan Z is not the fastest card in terms of per-GPU performance, but in terms of parallel processing there is nothing that puts this many compute elements in this form factor, period. Add to this the fact that the Titan Z sports full-speed double precision and the picture starts to come together. At about $3000 the Titan Z might seem expensive, but when one looks at how favourably its performance compares to professional workstation cards that cost much more, it really begins to make sense. If you’re looking for GPU computation power, why not just use an Nvidia Tesla card? Apart from the fact that Tesla cards are far more expensive, with the single-GPU Tesla K40c clocking in at a cool $10K, the Titan Z also out-specifies it by a large margin.
It may be true that the Titan Z, not officially being an enterprise card, does not have the level of enterprise support that Quadro and Tesla cards have. However, for the user that wants to put the most GPU compute power into a workstation-sized, or even normal-sized, case and have it under a desk, this is essentially the ultimate solution. It’s also the most practical way to get four GK110 GPUs into one tower case. Quadro cards that sport this top-end chip are limited to dual-GPU configurations, and for consumer cards such as the Titan Black (the single-GPU version of this card) a quad-card setup is prohibitively complex and extremely power hungry. For thermal reasons the Titan Z is clocked lower than the Titan Black, which means it is not quite as fast on a per-GPU basis, but this can be rectified by replacing the stock air cooler with liquid cooling or another advanced solution, and it would still be far less expensive than enterprise alternatives.
In theory you could put two cards in one tower case and have a total of 11520 CUDA cores in one computer that will go under your desk. The best part is that the Titan Z is certified with drivers in the way that normal consumer Geforce cards are, which means that when you are done with your GPU computing task your workstation will happily run all the software and games that any other Geforce card will. More than anything else, it’s this mix of flexibility and uncompromising performance at a great price that really makes the Titan Z stand out.
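To make the GPU-compute angle a little more concrete, here is a minimal sketch of the kind of embarrassingly parallel, double-precision work all those CUDA cores chew through. It assumes a Python environment with NumPy and the Numba CUDA toolkit installed (our choice purely for illustration; any CUDA-capable stack would do) and a CUDA-capable card in the machine.

```python
# Illustrative sketch: a double-precision "a*x + y" kernel where each
# GPU thread handles one array element in parallel.
import numpy as np
from numba import cuda

@cuda.jit
def daxpy(a, x, y, out):
    i = cuda.grid(1)          # absolute index of this GPU thread
    if i < x.size:            # guard threads that fall past the end of the data
        out[i] = a * x[i] + y[i]

n = 10_000_000
x = np.random.rand(n)         # float64 by default, i.e. double precision
y = np.random.rand(n)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
daxpy[blocks, threads_per_block](2.5, x, y, out)   # Numba copies the arrays to the GPU

assert np.allclose(out, 2.5 * x + y)
```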
If you’re convinced that the Titan Z is the card for you, then you can choose it as an option in our X495, X525 or A275 workstation builds.
The Quadro K4000 is one of those watershed hardware releases that makes you go “What were they thinking?” in the back of your head. The K4000 represents that rare confluence of technology and business that sees the release of a mid-range part that seems as if it belongs with the more expensive products in the range.
In the desktop space we saw this in 2007 with the release of the G92-based Nvidia Geforce 8800GT, a mid-range part with performance so good that it brought the relevance of much more expensive cards into question. The Kepler-based K4000 is just such a card. In terms of performance it stands toe-to-toe with the Quadro 5000 from the previous-generation Fermi architecture while costing half as much. In terms of price versus performance, no other workstation-grade card offers this much value. Even when compared to the next card in line, the K5000, the performance gap isn’t that great. Additionally, since the new Kepler architecture is so power efficient (maximum power consumption is a frugal 80W), the K4000 offers a staggering performance-per-watt figure.
One omission from the K4000’s feature set is SLI, which means that direct multi-GPU configurations are impossible, but for CUDA processing it should be possible to use a second card without SLI to boost performance in applications that support it. The K4000 is also relatively compact thanks to its single-slot cooler design. Any workstation with an open PCI-E x16 slot can take one of these cards without fear of width issues, although you should of course check that the case is long enough.
Let’s have a look at some of the numbers for the K4000:
CUDA Cores: 768
Gigaflops (Single Precision): 486.4
Gigaflops (Double Precision): 243.2
Total Frame Buffer: 3GB GDDR5
Memory Interface: 192-bit
Memory Bandwidth: 134 GB/s
Maximum Power Consumption: 80W
Don’t forget that, like every other Quadro card, the K4000 enjoys an additional layer of QA and support from Nvidia, as well as finely tuned drivers that are optimized for professional applications such as Maya, Solidworks, AutoCAD and the like. Although there may be consumer cards with seemingly higher performance at much lower prices, these are not comparable to something like the Quadro K4000. Have a look at our article on the difference between desktop and workstation GPUs for more information.
The K4000 needs to be viewed from the perspective of pro-grade cards, and seen from that angle it’s an absolute steal at about $800 retail. Its low power consumption, single-slot design and need for only one 6-pin power connector mean that it can go into workstations whose other components are not necessarily top-end, high-power units. The addition of a K4000 to such a system will nonetheless yield a powerful digital content creation and design machine; whether it’s 3D modeling or high-resolution video editing, the compute power provided by this card will seriously improve performance and productivity.
After months of rumblings and rumours around Intel’s next generation of high-end desktop CPUs, they are finally out in the wild. These new parts have been highly anticipated and we know that many people have been holding out on upgrading their existing setups until we knew more about the pricing and performance of these chips. Now, Haswell-E or “Core i7 Extreme” CPUs are heading for the shelves. Let’s have a closer look at the new architecture and the new developments it brings to the fore.
The first thing worth mentioning (and if you know anything about Intel you’ll know what it’s going to be) is that Haswell-E uses a new socket, so any hope of a drop-in upgrade can be put to bed straight away. The new Haswell-E CPUs use the new LGA 2011-v3 socket. The pin count and arrangement remain exactly the same as the original LGA 2011, but the two are electrically incompatible. As with other socket 2011 variants, 2011-v3 is keyed in such a way as to prevent incompatible CPUs from being placed in the wrong socket. This of course means that if you want a Haswell-E system you need to build one from scratch. The chipset you’re looking for here is the X99, which represents the cutting edge for Haswell-based desktop computers at present.
One of the most significant and visible changes brought about by this new CPU family is the introduction of an octa-core CPU to the desktop PC market. This represents an overall shift that blurs the line between high-end desktop and entry-level workstation CPUs when it comes to multi-threading. What only a little while ago required a dual-socket motherboard and two quad-core Xeons can now be accomplished in a standard, single-socket consumer machine. This might be the start of a real shakeup at the interface between enthusiast-class and lower end Xeon computers. That is, at least until the 14nm Broadwell-based silicon comes to market.
There are currently three Haswell-E parts available:
The flagship Core i7 5960X, with eight cores, sixteen threads and a 3GHz base clock.
The i7 5930K, with six cores, twelve threads and a 3.5GHz base clock.
The i7 5820K, with six cores, twelve threads and a 3.3GHz base clock.
All of these chips are rated for a 140W TDP, so the need for serious cooling is a given. The top CPU has 20MB of cache in total, while the other two sport 15MB each. The lowest chip in the range (relatively speaking, of course) has 28 PCI-E lanes, while the other two have 40. Prices range from an estimated $999 for the eight-core unit down to $389 for the 5820K.
The other big change is the move from dual-channel DDR3 to quad-channel DDR4. That’s probably one of the main reasons a new socket was necessary, as the memory controller resides on the CPU die itself. Intel claims that the 5960X, when compared to the old i7 4960X, performs 20% faster in 4K video editing, 32% faster in 3D rendering and 14% faster in video game physics and AI. That might not seem like a huge leap, but considering how powerful the 4960X is, these figures represent significant increases in absolute processing power and bring system performance in line with lower-end Xeon-based workstations that end up being significantly more expensive when the final bill comes.
The new Broadwell-EP architecture, which is scheduled for release late in 2014, will also use socket 2011-v3, which presumably means that building a Haswell-E system now leaves an upgrade path open to Broadwell-EP CPUs a little further down the line. Broadwell is rumoured to be a 14nm, 16-core (!), 32-thread design with a max TDP of 160W. That makes building a Haswell-E system now a very enticing proposition.
These Haswell-E chips are certainly beastly, but the new X99 chipset also represents a long-overdue update for connectivity. The X99 supports:
Up to 5 USB 3.0 ports
8 USB 2.0 ports
8 PCIe 2.0 lanes
10 SATA 6Gbps ports
There’s also support for the installation of a Thunderbolt 2 card, depending on whether individual motherboard manufacturers find adding the extra physical connector worthwhile. This last addition especially makes it look as if Intel is gunning for the workstation market from the other side of the line. Thunderbolt 2 connectivity is not something non-professional users are likely to find useful, so this feature is very revealing with regard to Intel’s overall strategy.
With a socket that will support at least one more generation of cutting-edge CPU technology, the long-neglected inclusion of USB 3.0 and SATA 6Gbps at the high end (following on from the X79 Express chipset) and a brand new memory technology, Haswell-E certainly deserves the excitement it is getting. “Prosumer” users who want a machine that will do double duty as a general-purpose work/gaming machine and an entry-level workstation for research and design applications may have found a new champion in the Intel Haswell-E design.
Keep an eye out as TitanUS will be updating many of our most popular machines with Haswell-E technology soon. Reading about these CPUs has been great, but we can’t wait to get some test units in the shop to really see how far we can push the technology.
Back in the old days, cooling wasn’t something anyone cared about when it came to desktop computers. Big supercomputers like the Cray-2 were using elaborate liquid cooling solutions as far back as the 80s, but for most users the CPU fan (there was no such thing as a GPU) was something you never thought about.
After desktop CPUs began hitting speeds well above 1GHz, and especially after enthusiasts realised they could push the silicon way beyond factory specifications, the high-performance cooler market exploded. Today you can find a bewildering array of coolers using a variety of technologies: from good old vanilla air cooling to exotic thermoelectric and liquid solutions, there are many ways to chill your hardware.
To us, cooling is about stability, longevity and reliability. The workstations we build, be they Intel or AMD, often have to run at 100% load for long stretches of time. One 140W CPU is bad enough; how about two, or even four? Then our clients also expect us to squeeze all of that heat into a tower case that will fit under their desks. Clearly we need cooling solutions that are a cut above the rest.
It isn’t just about keeping things cool either; noise management is nearly as important, especially when working in quiet environments or when many computers are operating in one room. A mild hum can become deafening when multiplied by 100 server racks.
Finally, they have to look good. Okay, that’s not important from a functional perspective, but we build our machines to a high standard which includes top notch machining and tidiness to a fault. One ugly component can ruin the look of a computer, especially if you have an attractive case with windows or mesh viewing ports.
We’ve tried more brands of cooler than we’d care to count, and none of them hit the sweet spot between performance, noise and quality the way that Noctua’s products do. As high-performance Intel and AMD workstation builders we have to make sure that our equipment is rock solid when it comes to thermal management. A gaming system builder might get away with solutions that are low on quality and high on flashiness, but if we get it wrong we get broken computers, unhappy customers and a sore bottom line.
Every TitanUS computer has a Noctua cooler either as standard or as an option. We always recommend that our clients fit their machines with Noctua products for their peace of mind and ours.
We don’t have the space to discuss the whole range of coolers, obviously, but four units in particular should give a good sample of Noctua’s fansinks overall.
The first cooler we’ll look at is the Noctua NH-L9i, which is at the lower end of the scale. This sub-$50 cooler weighs in at 420g and supports a single 92mm fan. It’s a low-profile unit specially designed for use in slim cases and with mini-ITX motherboards, and it’s primarily aimed at being very quiet. It even comes with a low-noise adapter that lets you further reduce the RPM of the fan, making it virtually silent, as long as the CPU has a TDP of 65W or less. The NH-L9i doesn’t cool any better than the stock Intel unit it replaces, but it does the job much more quietly, with the guarantee that it won’t interfere with RAM, graphics cards or very slim cases. It’s a brilliant little cooler for those impressive mini-ITX powerhouses.
The NH-L12 is slightly bigger, slightly heavier and slightly more expensive (approx. $70) than the NH-L9i, but gives a bit more versatility in its application. By default this cooler has a unique dual-fan design, with a 92mm fan underneath the radiator and a 120mm fan on top. The 120mm fan can be removed to reduce the profile of the cooler, which makes it ideal for mini-ITX builds. In general the NH-L12 is a better choice than the L9i as long as you don’t have any components that will interfere with it, though the L9i is of course quieter with the low-speed adapter fitted. With the 120mm fan in place this is a great cooler for µATX builds.
Noctua NH-L12 and NH-U14S
When we look at full-sized tower coolers, the NH-U14S is a solid choice. At a price of approximately $80 it isn’t far off the NH-L12, but it is designed for an entirely different purpose. Standing at a respectable 165mm, this isn’t going in a slimline case any time soon. The NH-U14S comes with a 140mm fan as standard and, as with other Noctua coolers, lots of small details aimed at making everything quieter. Just as with the L9i you get a low-speed adapter to quiet things down even more, but with a max noise level of 24.6 dB(A) it isn’t exactly a screamer. Nonetheless, assuming your CPU has a low enough TDP, using the low-noise adapter cuts that maximum figure down to 19.2 dB(A).
The U14S performs right in the middle of the pack in terms of thermal performance, but at the top when it comes to noise levels. Add another identical Noctua fan and the thermal performance equals some pretty elaborate and expensive high-end air coolers without sacrificing much in the noise stakes. The U14S is a great all-rounder for general system builds which doesn’t cost a fortune and equals or outperforms expensive and noisy solutions from other manufacturers.
The big dog in this selection is the NH-D14. This thing is massive at 1240g. It has a huge twin-radiator, six-heatpipe design with one 140mm and one 120mm fan as standard. The clever thing about the D14 is that the radiators are asymmetrical and the 120mm fan position can be moved somewhat, allowing for good compatibility with tall RAM modules despite how gigantic the cooler is. The D14 is so powerful that it actually produces internal case airflow, cooling other components as well. At full throttle this cooler only hits 33dB, barely louder than a whisper. In fact, every single Noctua cooler mentioned here maxes out at a noise level most people would struggle to notice at all.
The final note on Noctua products, and why we use them, has to be about reliability. Most of their coolers carry a 6-year factory warranty and the fans generally have MTBFs of more than 150,000 hours. These aren’t flashy coolers that keep an overclocked CPU just under melting point for an hour or two; these are coolers designed to keep a CPU in its healthy operating range at stock speeds, quietly, for years without any complaints. That’s why we love them: they are made according to the same philosophy of quality and performance we put into our system builds. Machines that work all day, every day, no exceptions.
The Quadro brand has really become synonymous with the concept of a workstation computer. Although AMD’s FirePro series are, in general, excellent parts, they just don’t have the brand presence of the Quadro products, which is why new additions to the line always make us sit up and pay attention.
Nvidia has released five new cards, although none of them unseats the K6000 as the flagship professional graphics solution. By the end of September 2014 all of these cards should be available for purchase, although we always recommend checking with us first when enquiring about a specific component. Each of these cards has a unique place in terms of market segment and intended purpose, so we’ll look at them individually, starting with the entry-level unit, the Quadro K420.
Nvidia Quadro K420, K620, K2200, K4200 and K5200
Quadro K420
The Quadro K420 is, as you might expect from the number, an entry-level card. In terms of performance it certainly isn’t going to set any benchmarks on fire, but at 40W TDP with professionally certified drivers and a sub-$200 price point there’s much to like about the K420.
The K420 is based on the Kepler GK107 GPU, but has 192 CUDA cores rather than the reference design’s 384. As standard this card comes with 1GB of DDR3 RAM on a 128-bit bus. The maximum supported resolution (using DisplayPort 1.2) is 3840x2160, which is of course the 4K video standard. This makes the K420 a good card for working with 4K content, and light CAD/CAM tasks and low-end 3D rendering should also not pose a problem. It also helps that this is an ultra-quiet, low-profile unit, making it even more attractive for a studio environment. On top of this, the K420 supports up to four displays simultaneously (with an MST hub), which is a pretty attractive workstation feature in such a low-cost card.
Quadro K620
Unlike the K420, the K620 sports the new Maxwell GPU design instead of the older Kepler chip. The K620 has 384 CUDA cores, a 128-bit memory bus and 2GB of DDR3 memory. The K620 replaces the K600 and appears to be quite a bit faster than its predecessor. Memory bandwidth remains unchanged, so if that was a performance constraint before it won’t be any different this time around; there is, however, twice as much RAM available compared to the K600, which certainly makes the K620 much more versatile as a workstation card. Like the K420, the K620 is a low-profile, low-power card, clocking in at a mere 45W. It too is therefore very promising for quiet-environment graphics applications where CPU power is more important than GPU power, but you still want the stability offered by workstation-grade components. Between these two cards it really comes down to how much you need the extra grunt, because in other respects they are by and large the same.
Quadro K2200
The K2200 is the first of the “serious” parts discussed here. Despite not being the most powerful card in this lineup, in one way the K2200 is a flagship: it is one of the most powerful sub-75W workstation cards available. If we keep the K2200’s power budget in mind, the specifications are all the more impressive. The Maxwell-based K2200 comes with 640 CUDA cores and 4GB of RAM, clocked at 1GHz and 5GHz respectively, with the memory fed by a 128-bit bus. The K2200 outperforms its predecessor, the K2000, by 78%, which is one of the larger generational leaps we’ve seen. At single precision this card will do 1.3 TFLOPS.
The K2200 is certainly in a position to make more expensive cards from the previous generation sweat a bit; in performance testing the K2200 decisively outperforms the K4000 in all but a few benchmarks. In fact, the K2200 isn’t that far off the pace of the next two cards on the list. For a sub-$500 card the K2200 packs quite a punch indeed.
Quadro K4200
Speaking of the K4000, its official replacement comes in the form of the K4200. The K4200 brings an overall 75% performance bump compared to the K4000: 2.1 TFLOPS achieved with 1344 CUDA cores and 4GB of RAM, at a cost of 105W at the outlet. A very welcome new feature on the K4200 is the inclusion of Quadro Sync, which was previously exclusive to the K5000 series. The K4200 represents the current performance cap for single-slot Quadro cards at a relatively modest power draw. For small-form-factor builds this is about as good as it gets at present, and the K4200 certainly raises the bar for entry into the high-end segment.
Quadro K5200
While not quite toppling the K6000, the K5200 is nevertheless a beast of a card. 2304 CUDA cores, a 256-bit memory bus and 8GB of RAM are nothing to sneeze at. That’s a 36% increase in compute power and an 11% increase in memory bandwidth compared to the K5000. The K5200 also adds support for ECC memory, a feature the K5000 lacked. There isn’t much more to say about this brute other than that it is a brute. What is significant is that Nvidia was able to put this much high-end performance into a card capped at the 150W level.
These new cards are definitely worth looking into, even if you are currently using their immediate predecessors. Performance, especially performance-per-watt, has improved tremendously throughout the range. It’s also very nice to see high-end features from higher cards move down to the K4000 series. All the cards listed here, save for the K420, are easy to recommend; you would only choose the K420 if you absolutely could not stretch to the K620, and our recommendation would be to go for the bigger sibling if at all possible. Nvidia has once again shown why it is still a dominant force in professional graphics, and we’re excited to see what builds we can come up with using this new generation of GPUs.
The ASUS KGPE-D16 Dual Opteron Motherboard - Heart of a Budget Server
Published:
There was a time, about a decade ago, when it really seemed as if perpetual underdog AMD was beating Intel on both price and performance. The Athlon XP desktop processors were giving the NetBurst Pentium 4 CPUs a solid whipping by being both cheaper and faster, and if it weren’t for the work of Intel’s Israeli engineers on the Centrino Pentium M, we might have seen an AMD-dominated market today. Today, however, Intel is firmly in the lead and there’s no argument that they make the CPUs with the highest outright performance. That’s only half the battle though: when it comes to the price-versus-performance ratio, Intel doesn’t have it all its own way. As a total system value proposition AMD generally has a good deal to offer. When you take into account that AMD rarely changes CPU sockets between generations and also has less expensive motherboards, that performance gap might not seem worth the price premium.
It’s within this context that we have to view the ASUS KGPE-D16 dual Opteron motherboard. This board supports multiple generations of AMD Opteron CPUs via the G34 socket. The top-end chip that will work on this board is the AMD Opteron 6386 SE, a 2.8GHz, 16-core server CPU, which means, since this is a dual-socket motherboard, that the KGPE-D16 can form the basis of a 32-core server or workstation system. There is also support for up to 256GB of registered DDR3-1600. That’s if you’re looking to max out the board, but the real value of the KGPE-D16 lies in the mid-range builds it makes possible at amazingly low prices. There are 16 memory slots, so reaching high RAM totals using inexpensive low-capacity DIMMs is a real possibility. There are also CPUs in the Opteron range that are now quite cheap, especially some Opteron 6100 octa-cores and 6200 12-core chips. If your workloads care more about threading than clock speed, this might just be the solution you’re looking for.
To further bolster the workstation credentials of the board, there are five PCIe v2 x16 slots, which should suffice for most graphics configurations. Be advised that you can only use four of them simultaneously at most; the ASUS board has various slot configurations for different purposes.
It must be said that there is a lack of I/O options on the back panel. Since this board is mainly aimed at server use you’ll only find two USB ports and no audio panel, so it’s a good idea to pick a case with ample USB expansion backplates. ASUS also offers an MIO module for audio as a concession to workstation needs.
In terms of server duties there are many storage options. As standard the KGPE-D16 supports six SATA II drives, but it also comes with the proprietary ASUS PIKE connector, which provides the option to add various PIKE cards for RAID implementations. There are actually 14 disk ports on the board, but without a PIKE card eight of them aren’t functional.
Two Intel gigabit network interfaces are built into the board, along with a separate Realtek management NIC, rounding out the networking chops of the device.
The final server-grade feature that makes this board stand out is advanced remote management. By using the optional ASUS ASMB4-iKVM module you can even mount and install an ISO remotely through a Java-based KVM. You can do remote ROM flashes, enter the BIOS and perform a whole list of other remote functions.
As you can tell, this is a serious piece of hardware, and ASUS hasn’t skimped on reliability either. The KGPE-D16 employs long-life Japanese capacitors rated for over 5 years of continuous use at 86°C.
This board really is a starting point with a heap of flexibility. From a massively threaded workstation that won’t break the bank to a low-cost server that will run multiple VMs with ample RAM and storage, there are few applications the KGPE-D16 isn’t suited for. In addition, the board is reliable, compatible and priced very well, especially when you take the cost of AMD CPUs into account. We really recommend that you don’t dismiss the idea of an AMD-based workstation or server; keep an eye out for our versatile builds based on the KGPE-D16.
When it comes to performance parts, it can be argued that GPUs are somewhat sexier than CPUs. Of course, in the professional computing world we care more about CPU performance than the general user does, but with the rise of GPGPU computing and affordable consumer-grade GPUs with thousands of processors, new high-end toys are always of interest.
So we were quite excited to hear that Nvidia had released two new high-end cards: the GTX 980 as its flagship and the GTX 970 as the upper-midrange model.
The GTX 970
Coming in at just over $300 (depending on the manufacturer), the GTX 970 is probably going to be the more popular of the two, so let’s look at its specifications first. The GTX 970 is squarely in competition with AMD’s R9 290X; in fact, AMD has very recently slashed the price of both the R9 290X and the 290 in response to the release of the two cards under discussion here, so you know the red team is worried about the green team.
Because there can be so much variation in third-party cards, we’ll only look at the reference specifications provided by Nvidia, but of course there will be plenty of factory-overclocked and custom-cooled cards on shelves, so keep an eye out for the best versions of these chips. The 900 series cards have only been out for a few days at the time of writing, so no clear favourite has emerged.
According to Nvidia, the GTX 970 has the following key specifications:
1664 CUDA cores
Clock speed of 1050MHz (1178MHz boost)
4GB of 256-bit GDDR5
Max resolution of 4096x2160 (4K)
Power draw: 145W
Minimum PSU 500W (2x 6-pin power connectors required)
The specifications already look great, but what’s really impressive is how low the power requirements are. A sub-150W card with this sort of horsepower is a real leap forward and is probably one of the best showings yet for the Maxwell GPU architecture. This is probably also the first card that is in any way practical for gaming or other 3D applications on a 4K monitor. We might be witnessing the first steps of 4K into the consumer mainstream, albeit at the higher end of the market.
The most promising thing about the GTX 970 is not that it’s much faster than the GTX 770 (the difference is substantial, but not earth-shaking); it’s that the GTX 970 is faster while using 80W less power at peak consumption. Even the GTX 760 is rated at 170W at stock, and it isn’t in the same league as the GTX 970. That makes this card a very compelling upgrade for existing machines, since you won’t need a PSU upgrade. As an SLI solution it’s even more attractive, since there are older 300W cards that won’t match a single GTX 970 and could be swapped for two 970s. The GTX 970 is certainly going to be a star component for building desktop rigs with frankly crazy amounts of GPU power.
The GTX 980
The new flagship card from Nvidia continues the theme of low-power and high performance. According to Nvidia’s reference design the key specifications are as follows:
2048 CUDA cores
A base clock of 1126MHz (1216MHz boost)
4GB of 256-bit GDDR5
Max resolution support of 4096x2160 (4K)
Power consumption of 165W (requires a 600W PSU and 1x 8-pin and 1x 6-pin connector)
Everything that’s been said about the GTX 970 applies here as well. In fact, both cards have the same GPU, with some processing units disabled on the 970. The performance gap between the two cards is admittedly not small, but the price difference is even greater at about $250. Still, this card outperforms the old GTX 780 Ti by a noticeable margin while cutting power consumption from 250W down to 165W. Many users (with the available cash) could pull out their 780 Ti and replace it with two 980s for only a relatively small jump in power consumption. For those hellbent on having a flagship card for whatever reason, the GTX 980 is the card to have, but (and it’s a big but) for an extra $100 over the asking price of a single GTX 980 you could have two 970s. That’s a pretty hard deal to pass on, in our opinion.
Bells and Whistles
Raw performance and power consumption are not the only factors to take into account here. The 900 series cards also bring support for DirectX 12 and a whole list of new tricks, such as a new voxel-based lighting technology and another technique that Nvidia claims can make 1080p displays appear similar in quality to 4K displays. In our experience new features like these aren’t usually worth a purchase by themselves while no one is implementing them in software yet, but since the 900 series cards can stand on performance alone they certainly are nice to have.
Conclusion
The initial impressions and reviews of these two cards over the first few days since their release have been very positive, and it seems a safe bet to recommend them, especially the GTX 970. If you currently have a power-hungry card like the 780 Ti, we recommend getting two 970s and replacing that single card with an SLI solution, if your other hardware allows for it. That should double your GPU performance for almost the same money as a single GTX 980. We can’t wait to start putting together test builds using these new cards. Keep an eye on the TitanUS Facebook page for new Intel and AMD workstations using the GTX 970 and 980; they’re sure to be worth a look.
What Does the Microsoft HoloLens Mean for Professionals?
Published:
Microsoft dropped some big bombs during its Windows 10 keynote recently. We’re looking at Windows 10 itself in a different article, but the announcement that really has everyone buzzing is the Microsoft HoloLens.
In the computing world we’ve pretty much assumed for almost two decades that wearable computers that use natural inputs would eventually become a reality. 2015 had even been predicted to be the “year of the wearable” with smart watches and fitness devices rising in popularity, but no one was expecting something as advanced and tangible as the HoloLens this soon.
The HoloLens is essentially an augmented reality (AR) headset. In other words, it’s a device that overlays computer graphics on your visual field. There are lots of AR apps available for smartphones that do this sort of thing, but what the HoloLens promises is way beyond anything we’ve seen before.
The HoloLens is meant to be a fully independent Windows 10 computer, untethered from any other system. It’s fitted with precise 3D head tracking and advanced spatial sensors that map out your environment, allowing the system to project virtual objects onto things like walls and table surfaces with a high degree of accuracy.
The “secret sauce” of the system seems to be a special HPU, or Holographic Processing Unit, that renders objects in your visual field while taking actual optical physics into account. This makes them appear real and solid rather than crudely overlaid, as is the case with current AR technology.
Think of it as a “holodeck” from Star Trek that you can only see while wearing this special headset. At the very least, a device such as the HoloLens could spell the end of display devices such as flatscreen monitors and televisions.
The demo from Microsoft really has to be seen to understand the potential of this device, but it’s the potential implications for professional computing that are really exciting.
A New Era in Workstation Technology?
Although consumers are clearly a prime target for the product, Microsoft has made clear its intention for enterprise applications. This has implications for professional computing applications across a wide range of industries. It will affect how we create, what we create and how our clients consume our products.
As a Display Technology
Based on what we’ve seen in the demo material currently available, the HoloLens will be able to simulate all types of display devices, from traditional flatscreens “mounted” on walls to floating images that turn with your head. Presumably, despite being a fully functional mobile computer in its own right, some form of HoloLens unit will also act as a display for a wirelessly connected local computer far more powerful than mobile technology allows. It might not be surprising to find that people’s workspaces suddenly become very spartan, yet once you’re wearing a HoloLens it turns out they have elaborate, customized display setups that take up the entire room.
What will you do with this increased scope for interactivity and presentation? Will you now directly manipulate 3D models as if they were physically there, using another technology such as the Myo? Knowing that your client or peers will also be using something like a HoloLens, how will you present data or engineering models to them? If you are creating a CG movie, will you now take a walk through a virtual “set” in the same way Microsoft demoed walking on the surface of Mars?
If HoloLens works anything like Microsoft promises it will, heck even if it only works half as well, it could change everything about how we work with digital information.
As a New Form Factor
If we take the leap and assume that something like the HoloLens will become the default visual and audio interface for computing devices, this opens up many new applications that don’t currently exist. We will still want to entertain and inform as we do today, but the scope for new ways to do old things will widen exponentially.
What if your CG movie is now designed to be seen from the inside? What if you give your potential investors a real-life experience of what you’re proposing rather than just telling them in words or showing them charts and animations? How will this change the way we collaborate?
It’s Here, Ready or Not
Microsoft wasn’t just showing a mockup of the HoloLens at the keynote. There were real, working prototypes which journalists were allowed to try, and their experiences are there to read all over the internet. This is something Microsoft is planning to release to market within the same timeframe as Windows 10. This is not just a concept; it is coming soon, whether we like it or not. Even more encouraging is that there are competitors in this market, such as Magic Leap and Daqri.
Whether HoloLens becomes a commercial success or someone else pulls it off, the age of mixed reality is upon us. We can’t wait.
It can be a real pain to keep up with new protocols and connector standards, especially if a lot of time passes between your upgrades or new computer purchases. So, like many people, you might be slightly confused by the seemingly sudden appearance of new connectors meant to hook up SSD storage devices to the rest of your computer.
Left: mSATA SSD - Right: M.2 SATA SSD
We had it good for a while. With PATA drives relegated to the dustbin of history, all you had to worry about was those neat little SATA connectors, but high-end SSD performance caught up with the limitations of the SATA protocol and something new was needed. So we began to see PCI Express SSDs. Yes, those other PCIe slots, the ones that don’t take graphics cards, now actually had a use. PCIe SSDs broke the theoretical speed limit of SATA 3 drives by about 30%, making many benchmark fans very happy indeed.
SATA fought back with the new SATA Express standard, raising the theoretical limit to 1250MB/s, roughly double what SATA III could ever hope for.
So, that’s the backstory. While this fight was going on in the desktop space, mobile versions of these protocols had to be found. Very small form factor PCs such as ultrabooks were, after all, ideal candidates for SSD storage, but the desktop versions of these parts would never fit. That’s where mSATA, or mini-SATA, came into play. This was a miniaturisation of SATA III that matched its theoretical performance but used tiny, stripped-down PCBs. In the beginning these were small drives under 10GB, but these days 128GB isn’t uncommon. mSATA slots then found their way back to desktop boards, where they were used as unobtrusive storage solutions.
Now you’re welcome to basically forget about mSATA, since it’s been replaced by the new and improved M.2 connector. If you’re building something new, mSATA is a dead end, although you’ll still find drives to keep a current machine going.
M.2 SATA Port
The M.2 connector is what you’ll find on new motherboards, but it introduces a very important issue you should be aware of: there are two types of drive that will plug into M.2, and they aren’t cross-compatible.
M.2 SATA uses the SATA III protocol at 6Gb/s and M.2 PCIe matches SATA Express at 10Gb/s, with speed bumps expected for the next generation. Your motherboard will only support one or the other, so check which type of controller your M.2 port is connected to before buying a drive!
It seems soon, we know, but it looks like 2015 will be the year Windows 10 releases to the public.
Don’t ask about Windows 9, we don’t know either.
Dubious mathematics aside, Windows 10 is shaping up to be the truly modern OS that Windows 8 was meant to be. Windows 7 & 8 users will also get to upgrade for free for the first year following the Windows 10 release, which is just more evidence that Microsoft is keen to reverse some of the bad press Windows 8 generated for the company.
The key idea underpinning the new Windows version seems to be unification. Microsoft wants Windows 10, in one form or another, on every device you own. Your Windows phone, tablet, Xbox and Zune (not really) will now talk to each other and share a similar interface look.
There’s also the surprising announcement of a new augmented reality interface for Windows 10, along with the Microsoft HoloLens hardware platform that supports it. Apart from that genuinely new technology, which no one was expecting, there are several other features worth highlighting. Please remember that things could still change between now and Windows 10’s mid-2015 release.
The Start Menu is Back! That’s right, probably the single most requested feature missing from Windows 8 is back in Windows 10. Yes, technically the Windows 8.1 update returned the Start button, but that wasn’t really what people wanted. The Metro interface isn’t actually gone, but the old desktop UI and the new Metro UI are now properly integrated instead of awkwardly living next to each other. The option to switch to the full-screen Metro interface is still there, which is good, since it was actually great for Windows tablets and HTPCs.
Virtual Desktops Long beloved by Linux users, virtual desktops finally arrive in Windows 10. If you’ve never worked with virtual desktops, be warned: once you do, there’s no going back. It’s an especially great feature for laptops on the road, where you don’t usually have multiple screens.
It’s All Flat This isn’t really a feature as such, but Microsoft has succumbed to the same fashion sense as Apple and flattened out the UI even more. Like it or not, it seems the flat UI trend will be with us for a while yet.
A Better Command Prompt You can now enable copy and paste in the command prompt. Wow, did it really take this long to include this feature? Power users will be dancing in the streets.
Spartan and Cortana Microsoft is really embracing the Halo references in this version of Windows, but hey Bill Gates was in a Doom promo video back when Windows 95 came out, so it’s not that strange.
Spartan is the replacement for Internet Explorer, essentially Microsoft’s equivalent of Apple’s Safari. Similarly, it will be multi-device and come to Windows tablets and phones. Generally it seems that Spartan is intended to fall in line with more modern browsers such as Chrome and Firefox. We don’t know much yet, but perhaps it’ll be good enough that people don’t just use it to download Chrome and then never touch it again, as happened with IE.
Cortana is Microsoft’s Siri. Now, however, she’ll be present on all your devices, and the demos we’ve seen show a real improvement in the software’s natural language processing. This shouldn’t be surprising if you’ve also seen the HoloLens demo. Microsoft has put a lot of time and money into this; they just didn’t make much noise about it until now.
DirectX 12 The new version of DirectX of course brings better high-end graphics, which is to be expected, but it also moves the focus to mobile devices such as tablets and phones, which is where many of us now consume 3D graphics. Windows 10 is also much more tightly integrated with Xbox. The experience of gaming on a Windows 10 PC is now likely to be closer to that of using an Xbox One in some ways. How much remains to be seen.
A Perfect 10? Most early impressions of the Technical Preview release of Windows 10 seem highly positive, but until the gold master gets into the hands of the general public we won’t know if Microsoft is on to a winner this time. What we do know for sure is that Windows 10 promises the freshest experience on a PC in years. It seems Microsoft has decided to aim for the stars with this one, and we’re excited to see if they succeed.
The inevitable has happened: after blitzing the high-end graphics market with the superlative GTX 970 and its big brother the GTX 980, Nvidia’s GTX 960 cards are now on store shelves. For most users the mid-range is the sweet spot, and predictably these cards sell extremely well. So if you’re keen on the latest budget performance parts, read on for the lowdown.
The GTX 960 is the first midrange card sporting the Maxwell GPU architecture and replaces the aging GTX 760, which has been holding the fort since its release in mid-2013.
The first consideration in whether you want an upgrade along the midrange curve is the feature set. Maxwell came with a few tricks up its sleeve, including DirectX 12 support, Nvidia Voxel Global Illumination, Multi-Frame Sampled AA and Dynamic Super Resolution. None of these features is a reason to buy one of these cards on its own, but it’s nice to know some of the high-tech silicon has made its way down the price range.
The GM206 chip on the GTX 960 has about 43% fewer transistors than the GTX 980, which translates to half as many CUDA cores. That’s not bad when you consider the GTX 980 is a $500+ card and the GTX 960 will retail for about $200, depending on the model. It seems you’d be getting a pretty good deal, all things considered. The problem here is the GTX 970, which trades blows with the 980 but only costs about $300 to $350. The GTX 960 isn’t in the same league as that card, yet the price gap is relatively small. The GTX 960 is about 30% slower than the GTX 970 on average, depending on the application. It also takes a big cut in VRAM, with only half of the 970’s 4GB. This means that high-resolution textures or other workloads that need lots of VRAM might fare much worse on the 960.
On the AMD side the GTX 960 is neck-and-neck with the Radeon R9 285; there’s almost nothing to choose between these cards, which trade blows across different benchmarks.
Things aren’t much better for the 960 when comparing it to the card it replaces. It outperforms the 760, but not by much.
Maxwell was never just about performance, though. The big shock came when we saw how little power these GPUs used. You could swap one GTX 780 for two 970s without changing your PSU requirements. Under an average gaming or graphics load the GTX 960 uses about as much power as a GTX 650 Ti. If you have a machine that currently uses something in the GTX 650 Ti performance class, the 960 could be a serious kick in the pants without having to upgrade the rest of the system. Many models of the 960 are equipped with very quiet coolers indeed, and passive cooling is a definite possibility.
Seen from this perspective the GTX 960 is a more palatable deal, but we still find it hard to recommend as long as the GTX 970 exists at that relatively narrow price gap. Even when you take SLI into consideration it doesn’t make financial sense. When the 960 drops below the $200 mark and gets passive cooling options, however, it may become the small form factor or HTPC card of choice.
SATA had a good run, that’s for sure. Those neat little connectors with their thin cables did wonders for aesthetics and airflow, but most of all they liberated us from the pathetic speeds of PATA drives, which topped out at 133 MB/s. First-generation SATA interfaces were rated at 1.5Gb/s, or 187.5 MB/s. Not such a great improvement, but each successive generation of SATA was planned to double that speed, and indeed SATA III will now theoretically hit 6Gb/s, or 750MB/s.
That seemed like a pretty future-proof plan, since mechanical hard drives have never improved in line with Moore’s law, which only applies to silicon electronics such as CPUs and RAM. That remains true: the fastest SATA III mechanical drives (excluding hybrid SSD drives) fall just short of 160 MB/s, a far cry from the 750 MB/s limit of the interface.
SATA Express connector to the right of traditional SATA ports on an ASUS motherboard
Then SSDs or Solid State Drives became affordable, mainstream products. Current high-end SSDs like the Samsung 850 Pro are solidly knocking on SATA III’s upper limit. That’s one single drive eating 550 MB/s in available bandwidth. It was clear that doubling the speed of SATA III would never keep up with the improvements in SSD technology, so instead of reinventing the wheel it was decided to embrace what had been a workaround for high-performance drives: PCI-express.
For a while it has been possible to get an SSD mounted on a PCI-express card using up to eight lanes for data transfer. These drives could blow SATA III out of the water, but the interface was never meant to host primary storage devices, which meant configuration could be tricky.
So SATA Express is just that: the standardisation of the PCI Express bus for storage devices. It provides an SSD, or any other compatible device, with multiple PCIe lanes for data transfer. How fast is it? You’d better sit down for this: 1969 MB/s.
That provides some breathing room for the future, since SSD speeds will have to quadruple before troubling the limits of SATA Express.
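If you’re wondering where that oddly precise 1969 MB/s figure comes from, here’s a quick back-of-the-envelope sketch. We’re assuming an implementation that exposes two PCIe 3.0 lanes at 8 GT/s with 128b/130b encoding, which is where the common headline number comes from; treat the lane count and PCIe generation as our assumption rather than gospel.

# Where SATA Express's ~1969 MB/s headline figure comes from,
# assuming two PCIe 3.0 lanes (8 GT/s per lane, 128b/130b line encoding).
GT_PER_S = 8e9            # raw transfer rate per PCIe 3.0 lane
ENCODING = 128 / 130      # usable fraction after 128b/130b encoding
LANES = 2                 # SATA Express exposes two lanes

bytes_per_lane = GT_PER_S * ENCODING / 8
total_mb_per_s = LANES * bytes_per_lane / 1e6
print(f"~{total_mb_per_s:.0f} MB/s")   # prints ~1969 MB/s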
If you look at the connector for SATA Express you’ll notice that it looks like two SATA III connectors glued together. That’s no coincidence. You can connect either two SATA III devices or one SATA Express device to each connector. It’s fully backwards compatible, which means there’s no reason to hold back on upgrading. Those two SATA III drives will, of course, only run at 6Gb/s.
Which you need to be OK with, because as of now there aren’t actually any SATA Express drives on the market. No one is sure exactly when the first drives will arrive, but it should be during 2015, unless something unforeseen happens.
If you’re in the market for a new computer or a motherboard upgrade there’s no reason not to go with SATA Express. When the new generation of drives arrives you’ll be ready to simply slot them in and be blown away.
The TitanUS X199 is our first new build featuring SATA-Express. You’ll find much more to like about it, so head over to the product section to see what this mighty machine has to offer.
Just about everyone who has a stake in computing has been keeping a close eye on recent developments in virtual reality. From gamers to professionals who create graphics and interfaces for others to use, we’ve been feeling the general shift to feasible mainstream VR.
We’ve seen movement in the mobile space with products such as the Samsung Gear VR, born of a partnership with Oculus.
Of course, so far Oculus has been where the smart money was when it came to mainstream, non-mobile home VR. Lots of money, to the tune of $2bn from global social media giant Facebook. Since then there have been more entrants in both the VR and AR (augmented reality) space. Microsoft has come along with the HoloLens concept and Sony is bringing Project Morpheus to the PS4. So it seems we’ll be spoiled for choice when it comes to different VR and AR solutions.
The VR space is therefore a rapidly heating industry, although we have yet to see much more than announcements and developer kits. So it shouldn’t be surprising that industry giant Valve, of Steam and Source Engine fame, would be interested in a seat at the table.
What is surprising is how hard they’ve come in. No one had any idea that Valve had put this much effort into making its own VR system, especially since it has shown such great support for Oculus through Steam and its own video games.
The system that Valve demoed to journalists at a recent expo blew many industry people away. The actual hardware on show was produced by HTC and goes by the name HTC Vive. The two companies have been working together, but what Valve is offering is an open software and hardware standard, allowing anyone to develop a VR product that will be compatible with everything else.
The system shown consisted of three components:
A Headset
Handheld controllers
Laser scanning base stations
The headset is pretty advanced, with a 1200x1080 screen for each eye and a refresh rate of 90Hz, which eliminates a major cause of motion sickness. The controllers employ triggers and what appear to be the same advanced touchpads found on Valve’s Steam Controller prototypes.
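To put those specs in perspective, here is a rough, purely illustrative bit of arithmetic showing what a GPU has to push to feed the Vive at its native settings. These are our own numbers derived from the figures above, not anything published by Valve or HTC.

# Pixel throughput and per-frame time budget for the stated Vive specs.
WIDTH, HEIGHT = 1200, 1080      # per-eye resolution
EYES = 2
REFRESH_HZ = 90

pixels_per_second = WIDTH * HEIGHT * EYES * REFRESH_HZ
frame_budget_ms = 1000 / REFRESH_HZ
print(f"~{pixels_per_second / 1e6:.0f} million pixels per second")  # ~233 Mpix/s
print(f"~{frame_budget_ms:.1f} ms to render each frame")            # ~11.1 ms

In other words, the renderer gets barely 11 milliseconds per frame, every frame, which is why serious GPU hardware sits behind these headsets.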
Both of these components are covered in small angled dots that allow very precise motion tracking. The two base stations scan the room so your virtual hands and head are tracked in precise 1:1 position. Furthermore, the scanners know where your real walls are, so you can get up and walk around, with a virtual grid appearing when you get close to a real wall. This is the closest to a real-life holodeck anyone has managed, and it’s a real product.
The main problem right now is that the hardware looks like, well, a prototype. Although the cyberpunk aesthetic is kind of cool. The other problem is that everything is still wired. The final product will be wireless, allowing a much freer experience.
If you are in the business of providing professional graphics products there’s no way this isn’t a game changer for the way we look at designing and consuming CG products.
The laptop form factor is hugely popular; these days, to most mainstream users, a personal computer is a laptop. As computers work their way ever deeper into our lifestyles it makes sense that the desk-bound form factor becomes less of a mainstream device. Which is great if your computing needs include watching YouTube, writing a few essays and playing some Farmville. When you start doing serious graphical tasks such as CAD, CAM and CG, or want to play the latest AAA titles the way they were meant to be played, you’d better get ready to empty your wallet.
Nothing blows up the price tag of a mobile computer like a high-end mobile GPU, not to mention the hilarious effect it has on battery life and thermal design. Finally, one of the most galling issues when it comes to laptops with graphics muscle is upgrading. We’ve reached a point where, even for high-end users, it’s no longer necessary to upgrade CPUs frequently, but GPUs are another story altogether. We’re still seeing substantial improvements in each generation of GPUs and, most importantly, there are software applications just waiting to suck up any extra GPU power new cards bring.
On a desktop machine this is no issue: just buy a new card, whip out the old one and replace it. With a laptop you have to replace the whole thing, even if the CPU, RAM and storage are all still perfectly fine.
Well this might be the year all of that starts changing. Both MSI and Alienware have demoed external docking stations that connect using PCIe and contain a full desktop graphics card. Dock a compatible laptop onto it and you turn your mobile, midrange productivity laptop into a full-on gaming computer or graphics workstation.
MSI’s “Dock Station” solution uses a proprietary connector, so it has to be paired with a compatible MSI laptop. It provides the full 16 lanes for the GPU as well as four USB 3 ports and a 3.5-inch hard drive bay. The dock can house most single-GPU cards, including new ones such as the GTX 980. At $2000 (paired with the GS30 Shadow, the only compatible laptop at present) this docking station isn’t cheap. But the first time you upgrade just the graphics card instead of buying a whole new laptop, or buy a new laptop that doesn’t cost an arm and a leg because you have a GPU docking station to pair it with, it will pay for itself.
Alienware’s “Graphics Amplifier” is a $300 device that only provides 4 lanes of bandwidth and none of the other extras like drive bays, but it is just as promising.
We’ve had attempts at external laptop graphics before, and if you know which Far East websites to peruse you can actually buy far less elegant (or reliable) solutions for your own laptop right now. The difference here is that it finally looks like polished, reliable external graphics solutions are coming to market, and it could not have happened soon enough.
USB is pretty boring, although many of us remember the dark days before USB when we had to struggle with so many different port types. Parallel ports, serial ports, PS/2 and many other irritating and ugly interfaces all fell to USB. It really has become practically universal, as the name says.
For professionals who do media production or other bandwidth-intensive computing tasks, high-speed connections to external storage or to devices such as HD cameras are vital. USB hasn’t always kept up with these needs, which is why we have other, faster standards such as FireWire, eSATA and Thunderbolt. You’ll usually find these on computers aimed at video editors, 3D modellers or even programmers who have to work off external storage for various reasons.
USB 3, with its 5 Gbps top speed, went a long way towards clawing that market away from these more specialised interfaces, and it had the added advantage of being backwards compatible with USB 1 and 2.
USB 3.1 might not seem like a big deal (it’s basically the same version number), but it comes with two important changes.
The first is speed: USB 3.1 doubles the throughput of 3.0 to 10 Gbps, putting it on the same level as a single-channel, first-generation Thunderbolt connection.
The other exciting development is the new Type-C connector which we saw at CES this year. It’s small, about as large as current micro USB connectors, but the most important aspect of the new connector is that it’s reversible. The difficulty in telling if you are putting your USB cable in right-side up has been a common complaint among users since USB 1.0, but with Type C cables it doesn’t matter. Type C cables can also work in alternate modes to carry other protocols such as PCI Express, DisplayPort 1.3, MHL 3.0 and even Base-T Ethernet.
Type-C promises to be a versatile and perhaps even more universal connection type. There are already a number of compatible devices that we know of, such as the Nokia N1 tablet.
Type-C can also handle USB 2, so devices with Type-C connectors don’t necessarily need USB 3.1 controllers. You don’t get the extra speed or electrical power of USB 3.1, but you do get that awesome reversible connector.
USB 3.1 looks like it will be serious competition for Thunderbolt and if it can win over professional and casual users alike we might see support for Thunderbolt dwindle. Simpler is always better for the user and having an even more universal USB can only be a good thing.
How excited are you for USB 3.1? Are you holding out for it before you upgrade? Let us know in the comments.
The world of professional recording has changed massively since the PC multimedia revolution. You don’t need a multi-million-dollar recording studio to make an album or a radio show (“podcast” to you cool kids); you just need a modest modern computer and some digital recording gear. You can have a single-microphone setup or multiple audio sources recording simultaneously. For more elaborate setups, special input hardware and audio processing equipment might be needed, but whatever your configuration, at the heart of it is a dead-ordinary personal computer.
Or is it?
It’s true that an average computer, say a quad-core with 8GB of RAM, has more than enough power to cover most home recording situations. If you’re making podcasts or editing your indie band’s multitrack a computer like this can get the job done. If you want to do audio work at a professional level though, you shouldn’t just take these sorts of specifications into account.
Reliability and quality, especially if audio editing is what you do for a living, are also very important, and that’s not something easily expressed in a spec sheet. It’s not a perfect rule, but your first clue is often that the price of a computer is too low. Many mainstream computer manufacturers will pour a computer’s budget into headline specifications like RAM and CPU model because these are easier to market to non-technical buyers. Other, less sexy components such as the motherboard or power supply suffer, usually ending up as cheaper, lower-quality, off-brand parts.
That’s why many lower end professional computers, the kind that would work for audio editing, seem inexplicably expensive next to similar (on paper) computers at places like Walmart.
These mass-produced machines may seem like a bargain, but in the long run you’re better off buying a professional grade computer.
For one thing, all of the components are high quality. Reliable hard drives, RAM and PSUs aren’t cheap. An electrically noisy PSU can wreak havoc on your recordings with hums and buzzes that you can never completely remove. Bad RAM or a low-end hard drive? Say hello to stuttering recordings and hiccups with multitracks. You might even face an inability to get multiple tracks to stream in sync, and various other small but critical glitches. Professional audio software is notoriously sensitive to minor instabilities that don’t affect video games or office work in an obvious way.
Even when components are all good-quality brands and models, it’s still possible that your computer has a dud in it, which is why professional computer builders run long burn-in or stress tests so that any faulty component fails before the computer is shipped. That’s a service included in the price. Your Walmart special might have a return rate of 1 in 10 PCs, but because of the total volume they still make a profit. Pro builders like Titanus focus on quality over quantity, which is why we burn-test our builds for 24-48 hours before they go to a customer.
Professional computers are also quiet, some even completely passively cooled, which means that semi-pro studios without a separate soundproof chamber are still viable for high-grade recording. To illustrate, look at our own X189 audio workstation, a hexa-core computer with 16GB of RAM. As a base computer it’s great for editing simple to moderately complex multi-track audio. Every component is appropriately balanced in terms of price and quality compared to the rest of the computer. If you wanted to handle more threads, and thus more streaming tracks, you could scale it all the way to 18 cores and of course add RAM for larger projects too. You’d also swap the primary drive for an SSD and keep the audio data on a separate secondary drive to eliminate latency and read/write contention.
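As a rough illustration of why the storage layout matters, here’s a quick estimate of the raw streaming bandwidth a big uncompressed session needs. The track count, sample rate and bit depth below are assumptions we picked for the example, not a spec of the X189.

# Sustained streaming bandwidth for an uncompressed multitrack session.
TRACKS = 64             # simultaneous playback tracks (assumed for this example)
SAMPLE_RATE = 96_000    # samples per second (a 24-bit/96kHz session is assumed)
BYTES_PER_SAMPLE = 3    # 24-bit audio

bytes_per_second = TRACKS * SAMPLE_RATE * BYTES_PER_SAMPLE
print(f"~{bytes_per_second / 1e6:.1f} MB/s sustained")   # ~18.4 MB/s

The headline number looks modest, but it’s sustained, latency-sensitive traffic that has to flow while the OS and plugins are hitting the same disk, which is exactly why a dedicated data drive, ideally an SSD, makes such a difference.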
The system, although already very small and quiet, can be made more so with an ultra-quiet PSU option and a water-cooling setup, all put together and burn-tested by us. Not to mention that professional computers also come with far better after-sales support, something the part-time high school kid at the local megastore isn’t going to provide.
Buying a professional PC, especially at the lower end, can seem like a waste of money if you don’t know where the money goes, but every cent is put into making something that’s consistent, reliable and well-rounded.
These days most people who need to shift around files that are too big for Dropbox will use USB flash drives, but even USB 3 flash drives top out at just over 200MB/s read speeds, with writes coming in a little lower. For professionals, who might work with huge datasets, massive project files or simply things too confidential for cloud storage, there’s a definite need for external storage that’s both fast and robust. If you’ve ever dropped an external mechanical drive you know the results are hardly ever pretty.
Samsung seems to have come up with a pretty neat solution: An external USB 3 SSD that’s really, really fast. The Portable SSD T1 comes in three sizes: 1TB, 512GB and 256 GB.
In terms of speed, sequential reads and writes can hit up to 450 MB/s over USB 3. Obviously random reads and writes are going to be slower, but portable storage is exactly the scenario where sequential performance matters most.
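To make that figure a little more concrete, here’s what 450 MB/s means if you copy an entire drive’s worth of data in one go. This is simple arithmetic on the quoted sequential speed, so real-world times will be somewhat longer.

# Time for a full sequential copy at the T1's quoted 450 MB/s.
SEQ_MB_PER_S = 450
for capacity_gb in (256, 512, 1024):
    minutes = capacity_gb * 1000 / SEQ_MB_PER_S / 60
    print(f"{capacity_gb} GB: ~{minutes:.0f} minutes")
# 256 GB: ~9 minutes, 512 GB: ~19 minutes, 1024 GB: ~38 minutes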
Furthermore, the T1 supports 256-bit AES encryption with password protection, making it a good choice for confidentiality.
The 1TB version is priced at about $570, which seems like a lot, but if you tried to buy a 1TB USB 3 flash drive today it would run you almost $890 and still be slower.
The drive is bigger than a flash drive, but smaller than a 1TB mechanical drive, in fact it’s eminently pocketable.
If speed is more important than size you can pick up the 512GB and 256 GB versions for $300 and $175 respectively.
For mainstream users the Samsung T1 might be a bit niche, and it doesn’t make much sense next to far cheaper solutions if you only look at capacity, but for professionals for whom time is money we think the T1 makes perfect sense.
For gamers, single-GPU performance is still a very big deal. If you thought it was hard enough to get software to scale evenly over multiple CPU cores, wait till you hear the moaning about multi-GPU setups. Despite major leaps in getting multi-GPU setups to work reliably and efficiently, it’s still a bit clunky to get two separate chips to double the performance of one. So, in a nutshell, we still care about which single-GPU card is the fastest.
Not too long ago we were talking about the GTX 980, which currently remains the king of GPU performance, although we’d still pick the GTX 970 nine times out of ten ourselves. It seems, though, that the belt might be heading for a new owner as Nvidia teases the Titan X, by all accounts a bruiser of a card.
We are still weeks away from getting all the details, but it seems many of the latest tech and VR demos at the 2015 Game Developers Conference were running on Titan X cards. Why not just use GTX 980s? We, like many others, must assume it’s because the Titan X spanks that card in terms of real-world performance. Those Unreal Engine 4 demos were very impressive, after all.
So for now this is what we think we know, based on various tech media reports, although this could all be wrong. Here goes:
The GPU is called the GM200 and is a 28nm Maxwell part.
Cards seen at the conference were using six and eight pin power, so it’s in the same league as previous top-end cards. Maxwell is pretty efficient though, so this isn’t a surprise.
Eight billion(!) transistors. That’s a possible 50% increase in shader processors versus the GTX 980.
12GB of frame buffer, not a crazy number for a Quadro card, but insane for a gaming product.
It’s going to be expensive - we’re just going on a hunch here.
Previous Titan cards have not disappointed and it seems as if Nvidia will be going all out on this one. It’s also once again becoming harder and harder to decide if the advantages of a Quadro or FirePro workstation card are worth it in the face of ultra high-end cards like these.
Are you excited for the Titan X? Would you consider one for professional use? One thing is for sure: the next few weeks are going to be long ones.
So we’ve been hearing good things about the powerful mobile Nvidia cards, the GTX 980M and 970M. Still, these days thin and light is the order of the day and many users have been waiting to see what’s happening at the middle of the 900-series mobile range.
There aren’t really any benchmarks out yet, since the cards were announced just three days before the time of writing, but the specifications are now known.
The GTX 960M has 640 CUDA cores, a 1,096 MHz base clock and GDDR5 memory clocked at 2,500MHz on a 128-bit bus.
The GTX 950M has the same number of cores, but a 914 MHz base clock and a 1,000MHz (DDR3) or 2,500MHz (GDDR5) memory clock.
Both of these parts come with a number of new features for mobile, mainly ways to improve battery life, which is always welcome. DirectX 12 support is of course a given.
Although the benchmarks aren’t in yet, the general feeling is that these cards will allow gaming on QHD devices at medium-to-high settings at 60 FPS. However, Asus has paired the GTX 960M with a 4K display, which frankly might be too much for this part, meaning you’ll have to run games at non-native resolutions.
Keep an eye out for the new ultrabooks that will be sporting these chips and of course benchmarks to see how they perform in real life.
The Difference Between Professional and Mainstream SSDs
Published:
SSDs are now becoming more common in consumer computers and are almost mainstream when it comes to professional computers or high-end gaming rigs. We often see them used as OS drives, with a large traditional mechanical drive doing duty as media and document storage.
On paper, SSD technology has spinning magnetic drive technology beaten in every way except price per GB. SSDs are much faster, much less prone to failure and not nearly as sensitive to physical forces as mechanical drives, but not all SSDs are created equal.
First, ALL SSDs degrade when writing data. After a certain number of writes to a given memory element that element will fail. Early SSDs could be destroyed using software torture tests in a distressingly short amount of time. These days the endurance of consumer grade SSDs has been improved immensely through better manufacturing processes, better firmware and better operating system support.
But there’s still a big difference between the write endurance of mainstream drives and professional- or enterprise-class drives. So if you intend to use a drive for jobs that involve a lot of data writing, you might want to think twice about which SSD you fork out for.
Samsung EVO (left) uses TLC memory / 3 Year Warranty - Samsung Pro (right) uses MLC memory / 10 Year Warranty
There are three types of flash memory used in SSD drives: SLC, MLC and TLC.
SLC Single Level Cell flash memory is the simplest, fastest and most robust flash memory you can get. It only stores one bit per cell of memory and is structurally very simple, hence the reliability. The firmware also doesn’t need to do anything complex to make SLC work compared to other types of flash, so there’s little processing overhead to speak of. SLC is, however, prohibitively expensive and, because of the lower data density, doesn’t come in huge sizes. In terms of write endurance, though, this technology is virtually bulletproof. Generally SLC will tolerate 10x the writes of MLC, which we talk about next.

MLC Multi Level Cell flash memory is the kind you are most likely to find in an SSD you’d actually buy. These units store two bits per cell and are therefore cheaper and come in larger sizes, but they don’t match SLC in terms of speed and reliability.
To give you a more concrete idea, the rated write endurance on the Mushkin Reactor 1TB SSD is 144TB. That’s on 16 nm MLC technology. That translates to about 130 GB of writes per day for three years (the warranty period) before drive failure. That’s perfectly OK for a desktop PC or even for professionals who don’t do work that writes a lot to the drive, but would you put that in a server? What about writing large working files such as video editing projects or other storage heavy creative outputs?
To help mitigate this issue there’s a special version of MLC known as eMLC, the “e” being short for “enterprise”. eMLC is MLC that has been enhanced to take more write cycles before breaking down. So, depending on your needs, eMLC drives might hit the sweet spot between price and performance when looking at enterprise applications.
How much of a difference does eMLC make to endurance? The Samsung 850 EVO 500GB unit, which is a great consumer drive, has a write endurance of 150 TB. The Samsung SM825 400GB eMLC drive has a write endurance of 7000TB. That’s a huge difference: to burn out the SM825 within its five-year warranty period you’d have to write 3.8 TB of data to it per day, every day. Clearly eMLC is something to look for if write endurance is going to be an issue for you.
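Those daily-write figures are just simple division, and it’s worth running the same numbers for any drive you’re considering. Here’s the arithmetic, using the endurance ratings and warranty periods quoted above.

# Convert a drive's rated write endurance (TBW) into an average daily write budget.
def daily_writes_gb(endurance_tb, warranty_years):
    return endurance_tb * 1000 / (warranty_years * 365)

print(f"Mushkin Reactor 1TB (144 TBW, 3 yr): ~{daily_writes_gb(144, 3):.0f} GB/day")           # about 130 GB/day
print(f"Samsung SM825 400GB (7000 TBW, 5 yr): ~{daily_writes_gb(7000, 5) / 1000:.1f} TB/day")  # ~3.8 TB/day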
TLC Triple Level Cell flash memory steps up the data density to three bits per cell, which, as you’d expect, lowers everything else from speed to write endurance, but it comes at a lower price and is still far faster than a traditional HDD.
This low-end flash is best used for extremely light client workloads or drives that will mostly be read from, such as drives holding video for streaming or a file server. Remember that, write endurance aside, SSDs have an MTBF (mean time between failures) measured in decades or centuries, so they are much more reliable than magnetic drives. TLC has its place, but it isn’t in the professional or write-heavy enterprise realm.
Measure Twice, Cut Once
If you are planning on getting Titan US to build you a server or workstation, discuss your plans for the machine with us so we can make sure that you fit the right kind of SSD to your computer. Apart from write endurance there are factors such as speed, capacity and price to take into consideration. Picking the right SSD can be a challenge, so don’t hesitate to speak to us. It could save you a lot of money or a big headache.
The story of computing electronics is one of ever smaller components: from early vacuum tube computers such as ENIAC, which filled 1800 square feet and ate an astounding 150 kW of electricity, to the 5.96-billion-transistor Haswell Xeon E5, which measures 662mm² and sips a mere 135W, depending on the exact model and clock speed. That’s a huge improvement in a mere three quarters of a century, and it cements the rule that when it comes to electronic components, bigger is not better.
When it comes to high-end graphics cards, though, you’d think this wasn’t the case. High-end cards have just been growing and growing in size. In the early days a card like the 3dfx Voodoo 2 would measure about 8 inches, have no heatsink or active cooling and barely fill the width of a single slot. These days something like the Nvidia GTX 980 Ti comes in at around 12 inches, with some manufacturers using a huge triple-slot design. So if you want something that’s going to fit into those popular little Mini-ITX cases you’ll have to settle for a mid-range GPU. It’s size or power, pick one, right?
Well, AMD seems to have chucked that rulebook straight out of the window with the R9 Nano graphics card.
After stomping on it and setting fire to it first.
There are two key facts that you’ll want to know. The first is that this card outperforms a Radeon R9 290X; the second is that it’s 6 inches in length. AMD considers this a flagship card and the $650 asking price reflects that, but if you are looking at building a high-end mini-ITX computer for gaming or 4K media applications there is effectively no other choice on the market. Even the GTX 970 mITX is solidly spanked by this card, which is unsurprising since the Nano trades blows with the GTX 980.
The Nano has the same number of stream processors as the new flagship Fury X card, based on the new Fiji GPU design. Although it doesn’t clock quite as high, it’s pretty close given the thermal constraints of the design. The Fury X hits 1050 MHz, but the Nano spends most of its time at 900MHz with a peak of 1GHz.
This new GPU is certainly an impressive piece of silicon, but the real star of the show is AMD’s (frankly revolutionary) High Bandwidth Memory (HBM). We’ll be talking about HBM in detail in another post, but the gist of it is that this memory is stacked in 3D with a relatively slow clock, but a massively wide bus. We’re talking 4096-bit buses here with 512GB/s of bandwidth. This is clearly in a league all its own and the Nano comes with 4GB of it.
Thanks to Fiji’s doubling of performance-per-watt (AMD’s claim), the Nano only uses a maximum of 175W, requiring a single 8-pin connector. The reference card also aims for a temperature of 75 degrees Celsius and a noise level of 42 decibels, which is about as loud as a library or bird calls.
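If you’re wondering how a 175W card gets away with a single 8-pin connector, the standard PCIe power budget works out comfortably. The 75W slot and 150W 8-pin figures are the usual PCIe limits; the 175W draw is AMD’s own number from above.

# Power budget for a card fed by the PCIe slot plus one 8-pin connector.
SLOT_W = 75           # a PCIe x16 slot can supply up to 75W
EIGHT_PIN_W = 150     # an 8-pin PEG connector is rated for 150W
NANO_MAX_W = 175      # AMD's stated maximum for the R9 Nano

available = SLOT_W + EIGHT_PIN_W
print(f"Available: {available}W, card maximum: {NANO_MAX_W}W, headroom: {available - NANO_MAX_W}W")
# Available: 225W, card maximum: 175W, headroom: 50W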
As used to rapid advancement as we are, it still boggles the mind that this tiny, quiet card can stand toe-to-toe with a GTX 980 and more often than not clearly outperform it.
While the Nano is exciting in its own right, it bodes very well for the immediate future of consumer GPUs, especially in mobile computing, where space is always at a premium. It’s also great to see AMD sucker-punching Nvidia again, which can only be good news for consumers.
What do you think of this amazing little card? Let us know in the comments below.
The integrated circuitry you’ll find in your CPU or GPU is an astounding technological feat: billions of microscopic logic gates squeezed onto a semiconductor die only a few hundred square millimeters in area. However, that circuitry is two dimensional, which is why die sizes are expressed in square units and not cubic ones. 3D integrated circuitry, on the other hand, stacks multiple connected silicon wafers on top of each other, packing much more into a more efficient space.
Until now we haven’t really seen many commercial applications, although, believe it or not, Intel built a 3D Pentium 4 back in 2004. More recently, Samsung’s 3D V-NAND found its way into the 850 Pro SSDs, where it brought a significant improvement to the performance of that technology.
Now AMD has brought 3D circuitry to the consumer space. The R9 Nano graphics card from AMD is a flagship, high-end graphics card performing at about the level of an Nvidia GTX 980, but it comes in at only six inches in length. This is possible mostly thanks to the new High Bandwidth Memory (HBM) that AMD has created.
This memory enjoys the advantages of 3D stacked circuits: a smaller package, significantly lower power consumption and a massive improvement in bandwidth. While the GDDR5 memory in the GTX 980 sits on a 256-bit bus, the HBM on AMD’s card is 4096 bits wide. This gives the memory 512 GB/s of effective bandwidth, despite only being clocked at 500MHz. In other words, where GDDR uses a narrow pipe (bus width) with a very high flow rate (clock speed), HBM does the opposite. For one thing this means parts that run much cooler, while total bandwidth is better than the GDDR solution. Time will tell if speeds on this wide bus can also be ramped up, but it seems that memory certainly won’t be a bottleneck. One current limitation for HBM is a 4GB memory ceiling, so it’s unclear when we might see Quadro or FirePro type cards with 12GB HBM allotments. This is a limitation of the stack height, the number of stacks the GPU can connect to and, of course, the memory density of the actual stack layers. It also remains to be seen whether HBM will meet the memory precision requirements for professional workstation cards.
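The wide-and-slow versus narrow-and-fast trade-off is easy to see if you run the numbers. The GTX 980 figures below use the commonly quoted 7 Gbps effective GDDR5 data rate; the HBM figures are the ones above, with the 500MHz clock moving data on both edges for an effective 1 Gbps per pin. Treat this as our own back-of-the-envelope comparison.

# Peak memory bandwidth = bus width (bits) * effective data rate per pin (Gbps) / 8.
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print(f"GTX 980 GDDR5: {bandwidth_gb_s(256, 7.0):.0f} GB/s")   # 256-bit bus at 7 Gbps -> 224 GB/s
print(f"R9 Nano HBM:   {bandwidth_gb_s(4096, 1.0):.0f} GB/s")  # 4096-bit bus at 1 Gbps -> 512 GB/s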
What is clear, is that this is a practical, reliable technology that might be set to change the size, energy and performance limits of consumer computing devices in a significant way. HBM is very impressive for a first-generation product and can only get better from here.
What do you think about HBM? Let us know in the comments.
The new Skylake series of CPUs represents a near miss for Intel’s famous “tick, tock” release schedule, thanks to problems with the 14nm production process. The 14nm Broadwell CPU line was released late, but Intel chose to release the “tock” architecture refresh on time, so we’d barely had time to get to grips with Broadwell before it was replaced. The next generation will therefore be based on a smaller process, which means that Skylake represents the pinnacle of Intel’s 14nm technology.
Refreshes of existing production processes usually focus more on better power efficiency and higher quality yields rather than outright performance improvements.
To get an idea of the performance delta we’re talking about here, compare the Skylake i5-6600K to the Haswell i5-4670K. The Skylake CPU delivers roughly 17% more performance in benchmarks on average. Not earth-shaking, but the Haswell chips were certainly no slouches in their own right. That kind of boost might not be worth shelling out for an upgrade on its own, but if you are buying new or upgrading from an older generation, Skylake is clearly the better performer.
One significant difference is the inclusion of dedicated H.264 video decode hardware, which means video playback can be handled by fixed-function silicon rather than tying up the CPU cores or GPU shaders. This is part of the overall strategy to improve power usage. Skylake CPUs can now also directly manage their own power states thanks to the new Speed Shift technology, for better overall power usage. These improvements will clearly have the most impact in the mobile computer space.
Skylake also brings support for DDR4 (and still DDR3), which might be important for tasks that rely heavily on memory bandwidth, especially in light of improved branch prediction for the Skylake architecture.
The platform also brings Thunderbolt 3 support, which means your Skylake laptop can be ready for things like external Thunderbolt graphics over USB-C, should the manufacturer enable it.
The high-end binning on these parts also seems exceptional, with some publications claiming an overclock of 4.8GHz on the 4GHz i7-6700K. While we never recommend overclocking for professional computers, such a stable overclock suggests great reliability at standard speeds.
Of particular interest is the launch of the Skylake E3-1500M v5 mobile Xeons. This means Intel has ushered in the era of mobile workstation CPUs, almost under our noses. Mobile computers with Xeon CPUs in the past have used desktop parts in custom chassis designs. These were, to put it mildly, not very battery-friendly machines: portable workstations rather than mobile ones. We’re still waiting on details, but expect high-end mobile computers with support for enterprise security, ECC memory and the other features that set workstation CPUs apart from their desktop counterparts. Couple that with the aforementioned external graphics over Thunderbolt 3 and we could be looking at a much lighter load for the modern professional computer user.
Mineral Oil Computer Cooling: Components go for a swim
Published:
If you want to cool your computer components you basically have two choices: air cooling or water cooling. For most people, air cooling using a heatsink and fan combination is the way to go. Modern copper heatpipe and fan combinations are relatively quiet, efficient and reliable. Water cooling is the elite solution for daily use: circulating water in a closed loop moves heat to a radiator, which may be active (with fans) or passive, and is generally much quieter than a multi-fan heatsink solution.
Alternatively, you can fill an aquarium with non-conductive mineral oil and dump all your components into it. That’s right, just completely submerge them, fans and all. Our instincts tell us that this should be a disaster, but since the oil isn’t conductive it’s basically just thicker air as far as your components are concerned.
Cooling electrical equipment in oil is hardly a new concept. Electrical transformers can be cooled in oil, some supercomputers (such as the Tsubame) are oil cooled and of course high performance desktop computers can also be oil cooled.
What are the advantages, though? For one thing, immersing your computer components in oil means that all components are cooled, not just the ones with passive or active heatsinks, which helps with longevity and stability. Oil cooling does not provide the instant, drastic temperature drop of water cooling, but a tank full of oil has far more heat capacity than the air in a case or the small amount of coolant in a typical water loop. Once it reaches a stable operating temperature it stays there, making it a viable cooling option for long-running computing tasks that take many hours or days. In fact, some companies do data centre overhauls that convert existing server blades into oil-cooled versions. It is not, however, suitable for extreme overclocking.
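To get a feel for just how slowly a tank of oil warms up, here’s a rough estimate. The specific heat and density are typical textbook values for mineral oil, and the heat load and tank size are figures we picked for the example, so treat this strictly as an illustration.

# Temperature rise of an oil tank under a steady heat load, ignoring the radiator entirely.
TANK_LITRES = 40
OIL_DENSITY_KG_PER_L = 0.85     # typical for mineral oil (assumed)
OIL_SPECIFIC_HEAT = 1670        # J/(kg*K), typical for mineral oil (assumed)
HEAT_LOAD_W = 300               # assumed steady load from the submerged components

oil_mass_kg = TANK_LITRES * OIL_DENSITY_KG_PER_L
joules_per_hour = HEAT_LOAD_W * 3600
rise_per_hour = joules_per_hour / (oil_mass_kg * OIL_SPECIFIC_HEAT)
print(f"~{rise_per_hour:.0f} °C rise per hour with no radiator at all")   # ~19 °C/hour

Even with the radiator taken out of the equation, a 300W load needs about an hour to warm a 40-litre tank by 20 °C, which is why the temperature curve stays so flat over long renders.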
So the heat goes into the oil, but what then? A pump circulates the oil through a radiator, which transfers the heat to the air. In server environments the radiator can be outside the building, but for a desktop unit the radiator can simply go on the back of the tank.
Tank? Yes, most custom oil-cooled computers use watertight aquaria, which means you can do lots of interesting things with lighting and aesthetics.
One very important issue to note is that you can’t submerge DVD drives or spinning magnetic disks; these need air to work, so they need isolated drive cages if you want to use them. SSDs, being solid state, are unaffected and can go in the tank with everything else.
Also take note that rubber-based insulation will degrade in the oil, so make sure nothing important relies on a rubber component, especially wire insulation.
So, to recap, these are the basic components that go into oil-cooling a PC:
A watertight container to mount components.
Specialist non-conductive mineral oil.
A pump to circulate the oil.
A radiator to dissipate the heat.
Some fish to swim around the tank (don’t actually do this, the fish will die).
The internet is filled with kit components and how-to guides for oil cooling projects. Oil cooling a PC is a completely different experience and can make for both a great conversation piece and a functional system.
Would you try oil cooling? Have you ever built a system like this? We’d love to read your comments and see the pictures.
Silicon integrated circuits have been advancing at an incredible pace over the last few decades. CPUs, RAM, GPUs and every other solid-state technology have benefited from these advances, but secondary storage devices such as hard drives have had nothing comparable in their development. Although the controller chipsets and buses have improved along with everything else, there is a limit to how dense you can make the magnetic bits on a drive platter, a limit to how fast those platters can spin and a limit to how quickly the read/write head can get where it needs to go. When flash memory came to market it wasn’t yet a threat to the highly reliable (in comparison) mechanical drives, and certainly not when it came to price-per-megabyte figures.
In other words, mechanical drives had a massive head-start in the market, but solid state memory such as flash was running a much faster race. Today the only place mechanical hard drives still have an edge is price-per-megabyte. Solid state drives are now at least as reliable, more power efficient, much faster and available in large capacities. Indeed, earlier this year Samsung revealed a 16TB 2.5” SSD, for the low, low price of (an estimated) $7000.
If money's no object, then solid state drives are your only choice, and when it comes to Samsung’s premium Pro series of drives you better have deep pockets. The new Samsung 950 Pro SSD with V-NAND, NVMe and M.2 interface will cost you a cool $350 for the 512GB version.
For that money though, you’ll get one of the fastest storage devices available today. The stated sequential read speeds of the drive are an eye-watering 2500MB/s and write speeds are equally impressive at 1500MB/s.
These speeds have been achieved using the same 3D V-NAND chips employed in the 850 Pro before it. The other part of the puzzle is NVMe (Non-Volatile Memory Express), a host controller interface designed from the ground up for low-latency flash memory.
The drive uses very little power, ranging from 1.7W at idle to 7W in burst mode. Half a gigabyte of DRAM and AES encryption support round out the features of the drive.
Samsung will warranty the 950 Pro for 200 terabytes of written data, so it’s not quite the right fit for data centre or write-heavy server use, but for high-end workstation applications you’d be hard pressed to find a better choice at this price.
CPUs get hot, really hot, and current cooling technologies are not great at getting at that heat. Since some parts of the CPU get hotter than others and the heat spreader is fairly inefficient at moving heat, your CPU’s thermal limit is basically set by the hottest spot on the die, not the average temperature across it.
A team of Georgia Tech researchers has now shown it’s possible to get cooling fluid right where it’s needed by cutting microfluidic passages directly into an existing, off-the-shelf chip. This technique gets the fluid within a few hundred microns of the actual transistors, orders of magnitude closer than the many millimeters of material between the silicon and the water in current water-cooling setups. How well did it work? Using a 28nm FPGA device as a testbed, the team was able to beat air cooling solutions by 60%. There is no heatsink and no cooling block, just an inlet and outlet pipe connecting directly to the die itself.
The test system used water at 20C and a flow rate of 147 ml/min, a tiny fraction of the flow in existing water cooling loops, where rates are measured in litres per minute, typically in the 3 to 4 L/min range. The chip was cooled to less than 24C, compared to the 60C achieved via standard air cooling.
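Those figures are enough to estimate how much heat the water actually carried away, using nothing more than the standard heat-capacity formula. We’re assuming the water leaves at roughly the reported die temperature, which is our simplification rather than something stated by the researchers.

# Heat removed by the coolant: power = mass flow * specific heat * temperature rise.
FLOW_ML_PER_MIN = 147
WATER_SPECIFIC_HEAT = 4.18      # J/(g*K)
INLET_C, OUTLET_C = 20.0, 24.0  # inlet water temperature and assumed outlet temperature

grams_per_second = FLOW_ML_PER_MIN / 60          # water is ~1 g/ml
watts = grams_per_second * WATER_SPECIFIC_HEAT * (OUTLET_C - INLET_C)
print(f"~{watts:.0f} W carried away")            # roughly 41 W, a plausible load for a 28nm FPGA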
This research is unlikely to find its way into desktop or even high-end professional setups any time soon, but it has potential for new high performance computer designs. So don’t be surprised if we see a prototype supercomputing cluster or datacenter using this technology in the next few years. The researchers also speculate that this approach could allow for chip stacking with short, high-bandwidth interconnects, another indication of supercomputing potential at room temperature.
This could also mean a long-awaited jump in clock speed. We’ve seen overclockers achieve insane clock speeds for brief spans of time using liquid nitrogen, but perhaps microfluidics holds the key to breaking the 4GHz barrier that desktop CPUs seem to have hit.
We’ll be keeping a close eye on this line of research without a doubt.
First Ever Nano-photonic Memory Chips Created in the Lab
Published:
Electronics, especially high-density integrated circuits, are hitting a wall in terms of speed and performance. Every time we think we’ve finally hit that wall, a new development has so far provided a little breathing space. Nonetheless, the fact is that eventually we will reach the end of what we can do with the electron as a computing medium. There are, however, a number of different computing methods that look to take over the performance crown when electronics can do no more. Quantum and DNA computers are two examples, although don’t hold your breath for these technologies as everyday computing replacements. A possible replacement for the electronics of today is something called photonic computing, which uses the photon, a particle of light, rather than the electron. For one thing, photons are much faster than electrons; after all, photons move at the speed of light in a vacuum, because they are light.
Whether photonic computing will eventually supplant semiconductor electronics is an open question. There are many things electronics currently do for which there is no photonic equivalent, and each of these logic functions needs to be replicated using photons. If researchers can pull off these multiple challenges we could have computing devices as much as 100 times faster than the best computers we use today.
One group of scientists may have cracked one of these problems: non-volatile photonic storage, in other words a photonic equivalent of something like flash memory. Until now, photonic components that stored information for computing were volatile; if the power went, so did the data. Using the same materials that rewritable DVDs and CDs use, at the nanoscale, the team was able to store data at relatively high density and at high speed within a photonic chip. They were able to store 8 bits of data at a single location, which is already a significant improvement over binary electronics.
The team claims that existing manufacturing technologies such as photolithography can be adapted to manufacture such photonic components and that even the prototype matches existing electronics in terms of speed and power needs. However, it is still physically far too large to be a viable alternative to flash memory.
Photonics still looks to be the most likely successor to present-day computing, and this is one important step towards that goal.
The REAL Difference Between Workstation and Desktop GPUs
Published:
Consumer desktop-grade hardware has gotten so good these days that many people question the need for expensive workstation-grade parts in their, well, workstations. The question is understandable, since the workstation version of a particular GPU may cost four times as much as the desktop card. This makes a kind of sense if you only look at the two types of card in terms of performance. The GPUs you find on a Quadro and a GeForce card of the same microarchitecture generation and version are essentially the same in terms of performance figures, or at least very close. Often the desktop part will even be faster in terms of peak performance.
This is however a false comparison as the two types of card are built for very different applications and there is much more embedded in the additional cost of these professional devices than may at first be apparent to you as a consumer.
In terms of hardware, workstation cards are geared more towards stability and efficiency than outright performance. A high-end desktop card aimed at gaming usually only sees a few hours at a time of maximum usage. With a workstation card a typical workload may include running at 100% for several days on end rendering 3D models, 3D animation or GPU-accelerated scientific and engineering applications. If you did this to a consumer card you would run the risk of board component failure and data errors.
The supporting components such as memory and capacitors on a workstation board are of a much higher grade than those found on desktop parts. The GPUs themselves are also binned for lower voltage and better stability; in other words, the best individual chips are set aside for use in pro-grade cards. The typical gamer does not care about (or even notice) small graphical glitches while playing, but similar deviations in pro-level applications can cause project delays and failures, costing thousands of dollars in lost productivity.
Workstation-grade cards also have features that the desktop market generally does not care about. These include (but are not limited to):
Increased color depth, so more colors can be addressed at once.
Double precision, where calculations are carried out to more decimal places (see the short example after this list).
Frame lock and genlock, where multiple monitors can be synchronized, even across multiple cards.
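To see why double precision matters outside of gaming, here’s a tiny, generic illustration of how single-precision rounding error builds up over many operations. There’s nothing GPU-specific here; it’s just the same float32/float64 number formats that GPUs compute with.

# Accumulating a small value a million times in single vs double precision.
import numpy as np

increment = np.float32(0.0001)
single_total = np.float32(0.0)
double_total = np.float64(0.0)
for _ in range(1_000_000):
    single_total += increment
    double_total += np.float64(increment)

print(f"float32 total: {single_total:.4f}")   # drifts visibly away from the expected 100.0
print(f"float64 total: {double_total:.4f}")   # stays at ~100.0000

Accumulated error like this is invisible in a game frame, but it is exactly the sort of thing that matters in engineering simulation and scientific computing.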
As you can see, just from a hardware perspective there is already going to be a difference between these two types of card, thanks to the binning and feature divergence at play. That still does not explain the huge price difference in its entirety, though. To understand the rest of the story we have to talk about the relationship between the professionals that use these cards, the developers that write software for those professionals and the companies that make the cards.
GPU manufacturers work closely with the people that develop software for professional applications such as Maya or 3DS Max. They work to optimize the relationship between the software and drivers so that maximum performance and stability can be achieved. This represents a big investment on the part of the GPU makers, who have to commit time and manpower to these projects. Those significant development costs must be recouped and the market for professional cards is much smaller than that of the general gaming and desktop computer market. So the already disproportionate R&D investment must now also be split amongst a much smaller group of customers. On top of this, there is usually round-the-clock support for users of these cards directly by the manufacturer. That ongoing support must also be financed. These are the unseen components of the cost difference when looking at desktop and workstation cards.
We hope you now have a better understanding of the distinction between desktop and workstation cards and that you will have an easier time making a choice between the two when taking your needs into consideration.
Remember that when you buy a workstation computer from TitanUS we are always on-hand to personally advise you on what hardware to buy based on your specific needs and budget, so feel free to give us a call or drop an email.