Wednesday, February 23, 2011

Chrome May Lose Its URL Bar

Could Google be dropping the URL bar next? It could.
It's not like Chrome is a particularly heavy browser as far as its UI is concerned, but Google is apparently thinking about removing even more elements from Chrome's surface to give web content more space. The latest idea is that the URL bar, called the Omnibox, could be eliminated from the browser UI, which would give Chrome more vertical space for web pages and applications.
However, it isn't a done deal yet and Google specifically mentions that users would have an opportunity to switch between the old "classic" layout and the new "compact" version. And even if the URL bar disappears, it is not eliminated in its entirety. The URL bar would simply be integrated into open tabs and could be edited in those as well.
Strangely enough, Google is planning to do more with the URL bar in the future than it has in the past, which casts doubt on the chances of the URL bar actually being removed. Chrome 11 or Chrome 12 is set to receive an Omnibox extension that will allow users to launch web apps directly from the URL bar.

IE9 To Launch On March 14

We are speculating a bit here, but we have a very good reason to believe that Microsoft's new browser will launch during festivities held at SXSW on March 14.
Microsoft sent out invitations to select journalists last week and invited bloggers to participate in the event that will "celebrate the beauty of the web." It's not just the event, it is also the fact that Microsoft's advertising team has been extremely busy and has been, as we hear, scouring the web for favorable quotes to be used in the launch campaign.
Also, March 14 would be just about one year after the introduction of the first Platform Preview of IE9 and we remember that Microsoft also used about one year to take the development of IE8 from start to finish. While we have no official confirmation we feel confident to say that the launch of the browser is imminent and will, most likely, take place on March 14.
In the meantime, we enjoy listening to a battle of words between Mozilla's Paul Rouget, who recently claimed that IE9 is an old browser, and Microsoft's Tim Sneath, who responded by lecturing Mozilla on what a modern browser should really look like.

U.S. Military Cyber Chief Calls For Cyber Force

The nation's military cyber chief says it is time for greater investments in STEM (science, technology, engineering, math) programs to keep up with other countries.
In a speech held at the RSA Security Conference, General Keith Alexander, Commander of U.S. Cyber Command and director of the National Security Agency (NSA), said that the United States needs to create a "cyber force" to be able to withstand attacks targeted at the nation's critical infrastructure. Threatpost quoted the General stating:  "We need to concurrently push STEM and educate the public about what goes on these networks so that we can fix it as a team. We need your help to do that."
According to Alexander, a cyber force could form a bond between government agencies and the private sector, through which early warning signs could be detected and the country defended against "sophisticated adversaries and malicious insiders." He envisions a team-based collaboration to counter trends in a world where "cyber offensive- and defensive operations are the keys to military victory."
Alexander called for a greater focus on education at the elementary and secondary level:  "We can't let the advantages we've had in the past erode the future," he said.

Report: HP TouchPad to Go on Sale in April

Perhaps we won't have to wait until summer, after all?
It’s been a few weeks since HP revealed the TouchPad, a 9.7-inch tablet packing WebOS 3.0. Though pricing and availability were not elaborated upon, the company whipped people into a frenzy talking up the tablet and its new version of Palm’s WebOS platform. However, today we learn that the HP TouchPad could come as soon as April.

According to Digitimes, HP is planning an April launch for the 9.7-inch device, with shipments arriving toward the end of next month. Citing sources from HP's upstream component partners, DT reports that HP is aiming to ship between 45 and 48 million notebooks and tablets in 2011. Subtracting HP's notebook shipment forecast, it would appear as though HP is projecting 4-5 million units shipped for the TouchPad series.

Still no word on pricing but we’re sure it’ll be in or around the $400-500 mark that’s fast becoming par for the course in the tablet world (barring the Motorola Xoom of course).

Lenovo Launches Six New ThinkPad Notebooks

Lenovo today updated its ThinkPad series giving the laptop line a sleek new look and Intel’s latest Sandy Bridge chips.
Though the last couple of weeks have been more LePad than ThinkPad, Lenovo has shaken things up this morning with the launch of the ThinkPad T, L and W notebooks. All told, there are six new notebooks: the T420s, T420, T520, L420, L520 and W520.

In the updated T series, Lenovo is promising improved boot times, along with improved battery life. While the super-slim T420s will supposedly deliver a 30 percent boost in boot times, the T420 will deliver up to 30 hours of battery life with a standard 9-cell battery (15 hours) plus an optional 9-cell slice battery. The 15.6-inch T520 is a little bulkier but packs the same power as its two smaller siblings. All boast the latest Core CPUs (i5 and i7) and Nvidia graphics (GeForce 4200M GPU, 1GB of VRAM).

Lenovo is touting the W520 (pictured) as a mobile workstation and with options for up to the quad-core versions of Intel’s Core i-series, Fermi graphics, support for up to 32GB of DDR3 and USB 3.0, the W520 definitely lives up to that classification. It’s got a 15.6-inch display, so it’s not a behemoth, but it’s a hefty old girl with a weight of just under 6 lbs, so it’s probably for the best that this isn’t a 17-incher.

Lastly, there’s the 14-inch L420 and 15.6-inch L520. These represent the newest arrivals to the Lenovo’s entry-level business line. Both come with the new Core i-series chips and can be configured for up to 8GB of DDR3. No USB 3.0 here, but USB 2.0 out the wazoo with four ports on each.

All six machines will be available in March and pricing for the T420s, T420, T520, L420, L520 and W520 starts at approximately $1,329, $779, $909, $719, $719 and $1,329, respectively.

Confirmed: iOS and Android to Get Minecraft

Minecraft is about to get a whole lot more popular.
Apple has long touted the proficiency of the iPod, iPhone and iPad when it comes to gaming and the latest news on the iOS gaming front is sure to help Cupertino’s cause. Minecraft studio Mojang has confirmed that it will be releasing iOS and Android versions of the game later this year.

Markus Persson, founder of Swedish Minecraft developer Mojang, this week confirmed to Gamasutra that the hugely popular indie game will be heading to Android and iOS devices. Mojang has a new recruit, Aron Neiminen, to develop the port but Persson warned that the iOS edition would not receive every update that users of the original browser and download version see. Instead, the suitability of each feature will be assessed for iOS’s touch-screen platform.

Meanwhile, Kotaku is citing Mojang's head of business development, Danial Kaplan, who says they're also working on a port for Android. No word on whether or not Neiminen will be working on that too, but the Android version is expected this year as well.

MacBook Pros to Launch on Steve Jobs' Birthday?


Steve Jobs might be off on sick leave, but the Apple CEO has been seen out and about around the Apple campus and remains involved in day-to-day operations. Now it looks as though Apple is planning to celebrate its CEO's birthday with a new line of MacBook Pros.

According to reports, sealed packages of MacBook Pros are being delivered to retailers along with instructions to open them on February 24, Steve Jobs' 56th birthday. Apple Insider cites sources familiar with the matter who say Apple is gearing up to announce new MacBook Pros this week and has warned retailers that opening the boxes earlier than Thursday could result in a store losing its license to sell Apple products.

Apple's annual shareholder meeting is scheduled for Wednesday, February 23, so it's possible Apple will announce the laptops tomorrow and start selling them on Thursday. Not much is known about Apple's plans for the MBP, but rumors of Light Peak integration have been rife over the last week. We'll have to wait and see.

New Samsung DRAM Boasts 12.8GB/s Transfers

Samsung’s been pretty busy with its successful Galaxy line of smartphones and tablets, along with the Nexus S, but the company this morning reminded us all that it’s not been resting on its laurels when it comes to hardware.
Samsung today revealed that it’s developed a 1GB DRAM for mobile devices that boasts a wide I/O interface and low power consumption to boot. The new mobile DRAM is capable of transmitting data at 12.8GB per second, an eightfold increase in bandwidth when compared to mobile DDR DRAM, and it’s made possible by the use of 512 pins for data input and output compared to the last-gen mobile DRAMs’ 32 pins. All this comes with a reduction in power consumption amounting to roughly 87 percent.

"Following the development of 4Gb LPDDR2 DRAM (low-power DDR2 dynamic random access memory) last year, our new mobile DRAM solution with a wide I/O interface represents a significant contribution to the advancement of high-performance mobile products," said Byungse So, senior VP of memory product planning and application engineering at Samsung Electronics. 
"We will continue to aggressively expand our high-performance mobile memory product line to further propel the growth of the mobile industry," he continued.

Samsung’s next move is to provide 20nm-class 4Gb wide I/O mobile DRAM sometime in 2013.

Mozilla Launches Firefox 4 Bug Countdown

In anticipation of the final release of Firefox 4, Mozilla is now showcasing a public buglist countdown. 22 blocking bugs to go.
The webpage can be found at canweshipyet.com and follows a recent trend at Mozilla of tracking the progress of projects via domains phrased as questions, such as areweprettyyet.com or arewefastyet.com. Of course, the page suggests a certain self-irony, as Mozilla has been trying for months to trim the number of bugs, which keep surfacing and prevent Firefox 4 from reaching the finish line. Yesterday, for example, the countdown was at 14 (blocking) bugs; today the counter is at 22.
According to today's meeting minutes, there were 20 blockers left (two new bugs have been found since then). Of those 20, 5 bugs did not have patches yet. Noteworthy recent fixes include memory-saving patches addressing problems that beta users noticed in Firefox 4 Beta 11, which caused a wave of complaints. According to Mozilla, Firefox 4 Beta is now used by about 2.3 million users.
It is unclear whether Mozilla will be able to keep to its most recent Firefox 4 release plans, which put Firefox 4 RC in a February time frame. The final release is said to be published sometime in March. On that note, IE9 is scheduled for a March 14 release. Right now it appears that Microsoft will have the advantage of releasing its new browser first. March should be an interesting month for web browsers.

VOTW: Playable Angry Birds Birthday Cake

Sure, you may have fashioned a real-life Angry Birds set out of Play-Doh and watched, content, as your children fired those cantankerous little birdies across the room, but anything made out of cake is instantly better than Play-Doh.


Mike Cooper from Electric Pig apparently has a bit of a knack for making birthday cakes that would make your Nana weep with jealousy. For his son’s sixth birthday, he decided a playable version of Angry Birds was in order. Mike says it took him 10 hours to make and that his son managed to wreck it in approximately two minutes. Then they ate it. Well, we assume they did. They at least blew out candles.

Check out the process and gameplay below.

If you want to make your own Angry Birds cake, Electric Pig has posted a step-by-step guide for making the cake. Adorable 6-year-old sold separately.


Friday, February 18, 2011

Motorola Atrix Now Launching Next Week

The Motorola Atrix stole our hearts when it was first seen at CES in January and now it looks like the little scamp of a superphone is about to surprise us once again: Moto is apparently moving the launch of the device forward by one week.
Ten days ago AT&T fully detailed the Motorola Atrix and said it would begin taking preorders February 13 for a March 6 release. However, it seems the folks at Motorola have been going over their calendar and have seen fit to make some changes. The following notification is supposedly being sent to AT&T customer service reps and indicates that the Atrix will actually be launching on February 22 -- just five days from now.

Obama Meeting With Jobs, Zuckerberg, Schmidt

Getting techie with the President.
Barack Obama is the most tech-savvy president that the U.S. has had yet. Tonight he'll be hosting a dinner meeting with tech leaders in California.
According to ABC News, Obama will be meeting with Google CEO Eric Schmidt, Facebook CEO Mark Zuckerberg, and Apple CEO Steve Jobs. The latter participant draws some interest after tabloid reports of his deteriorating health.
This won't be the first time that Jobs and Obama have met, though, as the two met last year for around 45 minutes, supposedly discussing technology, education and job creation.
“The president and the business leaders will discuss our shared goal of promoting American innovation, and discuss his commitment to new investments in research and development, education and clean energy,” a White House official said.
This would follow along Obama's State of the Union plans that he outlined in his speech last month.

Motorola's Xoom Tablet Priced at $799

We had a pretty good idea that it was going to be pricey, and now we know it’s going to be pricey.
The pricing for Motorola’s Xoom tablet has officially been announced, and while it’s not quite the $1,199 we thought it could be, it is going to cost you quite a pretty penny. Speaking to Reuters, Motorola Mobility CEO Sanjay Jha confirmed that the 10.1-inch tablet would sell for an unsubsidized $799 at Verizon Wireless, while the WiFi-only version of the device will be priced at a more competitive $600.

Around the time of the Super Bowl, a Best Buy ad priced the device at $800, but the price was not confirmed by Motorola. Earlier this week, a place-marker for a $1,199 Xoom set eager hearts racing. However, the new pricing, particularly the WiFi-only pricing, seems a lot easier to swallow. While 3G is a nice feature to have, it comes with the added hassle of having to sign a one- or two-year commitment, and that's not something a lot of people are willing to take on. That $200 price difference, along with the fact that WiFi access and hotspots are becoming so widespread, will make it pretty easy for people to talk themselves down from the 3G model.

A quick refresher course for those with memories that don’t serve as well as they used to: The Motorola Xoom packs Nvidia’s dual-core Tegra 2 chipset, a 10.1-inch widescreen 1280x800 HD display, HDMI out, 1080p video playback, two cameras (2-megapixels up front and 5-megapixels + LED Flash in the back), a built-in gyroscope, barometer, compass, accelerometer and adaptive lighting for new types of applications.

Worth $600? Worth $800? Let us know!

Intel May Have Leaked New MacBook Pro Design

If seeing Dell’s leaked smartphone and tablet roadmap for the next year or so doesn’t do it for you, there’s another leak floating around that might grab your interest. Fact is, Intel may have accidentally leaked the new MacBook Pro designs, just like it did last year.

Here in this Intel ad is an unnamed MacBook look-alike that the Apple blogs are suggesting could be the new MacBook Pro redesign. It shows a slimmer, sleeker design that’s closer in size to the MacBook Air than the MacBook Pro, if we’re being honest. Still, it’s possible Apple has decided to take some MBA design cues and incorporate them into the new MBPs. Fast Company points to that light patch on the right-hand side, which it says is an IR sensor, and claims the placement is "classic Apple design."

That said, there’s also nothing to indicate that this ad is even showing a MacBook Pro. It could be another laptop entirely, or even just a render from Intel. The thing that’s got everyone whipped into a tizzy is the fact that Intel did precisely this last time the MBP was about to get a refresh: The company accidentally revealed the new model as part of a contest for Intel Spain. The prize was a MacBook Pro packing a new Intel Core i5 processor. Unfortunately, Intel ran this campaign before Apple had announced the machine itself.

What do you think? Could Intel make the same mistake two years in a row? Or is this a very, very, very long shot?

Sony Announces 17- and 25-inch OLED Monitors

OLED monitors aren’t exactly on every desk in every office just yet, not even close, but for those who like staying ahead of the curve, Sony has just revealed two OLED panels for us to drool over.
Available in 17- and 25-inch flavors, the displays represent new additions to the company’s Trimaster EL line and will be dubbed the BVM-E series. Anyone who’s got a phone with an OLED display will be familiar with the rich colors, deep blacks and power efficiency offered by OLED technology. However, Sony claims the BVM-E series are the first monitors to deliver full HD resolution OLED panels with RGB 10-bit drivers. You’ve got the usual 3G/HD/SD-SDI, HDMI and a DisplayPort, along with a panel resolution of 1920x1080. Standard luminance is 100 cd/m2.

“These new monitors are the next step in professional displays, providing end users with extremely high picture quality,” said Gary Mandle, senior product manager at Sony Electronics’ Professional Solutions of America group. “This is breakthrough technology for applications where visual performance and accuracy are paramount, offering an unbeatable combination of image reproduction, color accuracy, reliability and stability.”

The BVM-E250 is set for availability sometime in the middle of April, while the 17-inch BVM-E170 will be out in June. Of course, pricing is a bit scary (they are OLED panels, after all), and unless you’re in the fortunate position of being able to convince your boss you actually really, really do need one, it’s going to cost you ¥1.3 million ($15,710) for the 17-inch BVM-E170 or ¥2.4 million ($28,910) for the 25-inch BVM-E250.

U.S. Shuts Down 84,000 Websites By Mistake

The U.S. government accidentally shut down over 80,000 websites while in the process of shutting down domains related to child porn and counterfeit goods.
Last week, the U.S. Department of Justice and the Department of Homeland Security set out to seize domains believed to be involved in child pornography and counterfeit goods. Unfortunately, the operation didn’t exactly go off without a hitch, as 84,000 domain owners were faced with this message on Friday:


TorrentFreak reports that one of the targeted domains actually belonged to a DNS provider, which was why so many innocent people were faced with ‘the banner’ on Friday.
TorrentFreak:

“As with previous seizures, ICE convinced a District Court judge to sign a seizure warrant, and then contacted the domain registries to point the domains in question to a server that hosts the warning message. However, somewhere in this process a mistake was made and as a result the domain of a large DNS service provider was seized.”

Things were finally fixed by Sunday, but having such a warning appear on your website for any length of time can be harmful to one’s credibility. According to TF, one affected user had to post the following message on his site in an attempt to curb the bad impression a distribution of child pornography warning carries:
“You can rest assured that I have not and would never be found to be trafficking in such distasteful and horrific content. A little sleuthing shows that the whole of the mooo.com TLD is impacted. At first, the legitimacy of the alerts seems to be questionable — after all, what reputable agency would display their warning in a fancily formatted image referenced by the underlying HTML? I wouldn’t expect to see that.”

For its part, the DHS has apparently neglected to acknowledge the blunder.

Wednesday, February 16, 2011

New Smartphones at MWC 2011

Sony Ericsson Play

This is the much anticipated PlayStation phone: it runs Android 2.3 Gingerbread, and sports a slide-out game controller. It will be found on Verizon this spring but no pricing info is yet available. Sony says 50 games will be available by the launch date--with the potential to play back at 60 frames per second. The phone sports a 1GHz Snapdragon processor, 5MP camera, 4-inch 854x480 multitouch screen, and will last for 5 and a half hours, according to Sony.

Three 1000 W 80 PLUS Gold-Certified Power Supplies Tested

We received a trio of 1000 W power supplies priced between $200 and $300, so we ran them through our usual suite of tests to see if they really live up to their 80 PLUS Gold certifications. Surprisingly, all three hiccuped during efficiency testing.
It's easy enough to assume that 80 PLUS Gold-certified PSUs with power ratings in excess of 1000 W are not built with the broad masses in mind. It takes a serious configuration to require such high power delivery ceilings. Nevertheless, the findings in this roundup make it pretty clear how much importance the manufacturers attach to the quality of their products. Even the slightest deficiency is exposed immediately at such high loads, and the negative effects on energy efficiency are often quite severe. We received three PSUs for this piece, and we put them all through our gauntlet.

Compared to our recent roundup of gaming PSUs, where the manufacturers almost buried us in test samples, the range of products is much more manageable in the high-end space. We're looking at two 1000 W PSUs from OCZ and Rosewill, along with a 1250 W PSU from Sparkle. Can the Sparkle PSU exploit its significant power rating advantage in any way? And how will these 80 PLUS Gold units perform at the low loads typical of an idle PC?
Also Tested: Standby Power Consumption, EuP Standard
The European Union’s Eco-design Directive 2009/125/EC, also known as the EuP (short for Energy-using Products) directive, imposes stricter limits on the standby power consumption of PSUs starting in 2010. As more and more manufacturers advertise EuP certification on their PSUs, we are introducing the appropriate test methods in our reviews. Unlike previous standby measurements made with the 5 V-sb rail active, there are no loads on the rails in the EuP tests. The PSU must manage a standby power consumption of less than 1 W.

MALIBAL's Lotus P150HM: GeForce GTX 485M Gets Its Game On

Using the latest advances from Intel and Nvidia, MALIBAL attempts to prove that portability and performance are no longer mutually exclusive. Can a fully-loaded Lotus P150HM meet the needs of performance enthusiasts and gamers at a more reasonable price?
As the mobile vendors who specialize in DTRs try to prove their battery-equipped workstations can replace performance-oriented desktops, many appear to have forgotten that most people like to carry their portable devices farther than the distance from their office to their car. Due to the extra cooling and energy needs of high-performance components, any effort to reduce weight has come at a huge cost in capability.
That is, until now.

The recent launch of Intel’s Core i7-2920XM CPU, which brings massive efficiency gains to the table, puts desktop-class performance into mid-sized notebooks, addressing exactly one half of a “high-end” notebook’s typical shortcomings. The other half of the middleweight performance problem is graphics, to which Nvidia thinks it has an answer in its GeForce GTX 485M.

The combination of these latest components looks good on a spec sheet, but we had to find out how well these worked in actual games and applications. MALIBAL was ready to help us find out, sending its mid-sized Lotus P150HM for evaluation.
MALIBAL Lotus P150HM Configuration
Platform: Intel FCPGA988, HM65 Express, MXM-III Discrete Graphics
CPU: Intel Core i7-2920XM Quad-Core 2.5-3.5 GHz, 8 MB L3 Cache, 32nm, 55 W
RAM: 16 GB (4 x 4 GB) Apacer DDR3-1333 MT/s SO-DIMM, CL9, 1.5 V, Non-ECC
Graphics: Single Nvidia GeForce GTX 485M, 2 GB GDDR5, 575 MHz GPU, GDDR5-3000, 256-bit
Display: 15.6" "Full HD" Glossy, LED-backlit TFT, 1920x1080
Webcam: 2.0 Megapixel
Audio: Integrated HD Audio
Security: Fingerprint Scanner

Storage
Hard Drive: Intel second-gen X25-M 120 GB, MLC, 2.5-Inch, SATA 3 Gb/s SSD
Optical Drive: Panasonic UJ240 Blu-ray Burner: 6x BD-R, 2x BD-RE, 8x DVD±R
Media Drive: 9-in-1 flash media interface

Networking
Wireless LAN: Intel Ultimate-N 6300, IEEE 802.11a/b/g/n, 11/54/450 Mb/s
Wireless PAN: Optional Internal Bluetooth Module (not included)
Gigabit Network: Integrated 10/100/1000 Mb/s Ethernet
IEEE-1394: None
Telephony: None

Peripheral Interfaces
USB: 3 x USB 2.0, 1 x USB 3.0
Expansion Card: Internal Only
HDD: None
Audio: Headphone, Microphone, Line-In, Digital Out
Video: 1 x VGA, 1 x HDMI

Power & Weight
AC Adapter: 180 W Power Brick, 100-240 V AC to 19 V DC
Battery: 14.8 V, 5200 mAh (77 Wh) Single
Weight: Notebook 7.0 lbs, AC Adapter 1.8 lbs, Total 8.8 lbs

Software
Operating System: Microsoft Windows 7 Home Premium 64-bit Edition, OEM

Service
Warranty: 3-year labor, 1-year parts (Add $149 for 3-Year Full)
Price: $3,307

The configuration we received adds over $2,000 to the base model, including the $895 CPU upgrade from the standard Core i7-2630QM and a $495 upgrade from the GeForce GTX 460M. While many of these upgrade rates seem at least somewhat reasonable, substituting Intel’s 120 GB X25-M for the base model's hard drive costs $5 more than purchasing the drive separately from Newegg. Though reinstalling the OS would have been a nuisance, there’s little chance any of us would pay someone $5 to take the standard 320 GB drive off our hands.

What Do High-End Graphics Cards Cost In Terms Of Electricity?

Many reviews analyze the minimum and maximum power consumption of a given graphics card. But just how much power does a high-end graphics card really need during the course of standard operation? This long-term test sheds some light on that question.
We wanted to take a less conventional approach to a question that comes up in just about every graphics card review, and actually measure the power consumed from the moment you turn your computer on (let’s say for a gaming and email session) until it is turned off again.

Our feeling was that the usual extrapolations and estimates using minimum and maximum power readings don’t do justice to everyday operation. Therefore, we decided to measure the actual power consumption over a certain period of time and with different usage models, because most people do not just turn on their computers and play games without ever doing something else.

Defining that "something else" is actually rather important. As we measure and monitor power consumption during a longer testing period, we also add frequently-used programs and services to check whether or not these would increase power consumption compared to true idle operation. In addition to games in windowed and full-screen modes, other hardware-accelerated tasks include video playback and D2D/D3D content in windowed mode.
Obviously, we are mostly interested in finding out the true, total power consumption in real life, rather than the peak load or idle values. This brings us to the core of today’s examination: it is no secret that powerful graphics cards are expensive, but do they really use that much more power? Will a gaming PC with a powerful 3D graphics board really bring your electricity bill up over time? If so, by how much?
We set out to do some testing, and it turned out to be a lot like mixed fuel consumption and mileage testing on a car. The difference is that we're representing our results in watt-hours instead of miles per gallon or kilometers per liter.
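To show why watt-hours are the useful unit here, a minimal sketch of how one session's measured energy translates into a yearly bill; the 900 Wh session and the $0.12/kWh electricity rate are illustrative assumptions, not values from our measurements.

```python
# A minimal sketch, not our actual test tooling: converting one measured
# session's energy use (in watt-hours) into money.

def session_cost_usd(watt_hours: float, price_per_kwh: float = 0.12) -> float:
    """Cost of one session's measured energy consumption."""
    return (watt_hours / 1000.0) * price_per_kwh

daily_session_wh = 900.0  # hypothetical 4-hour gaming/email session
print(f"Per session: ${session_cost_usd(daily_session_wh):.2f}")        # $0.11
print(f"Per year (daily): ${session_cost_usd(daily_session_wh) * 365:.2f}")  # $39.42
```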

Monday, February 14, 2011

3D TV: 2D Content Converted To 3D

You can expect the first few waves of 3D TVs to be labeled as 3D-ready, just as high-definition TVs were labeled HD-ready when they were new on the market.
A TV is defined as 3D-ready if it has a 3D processor and emitter to communicate with the active shutter glasses. The TV must be compatible with multiple 3D standards, including half/full HD resolution and Blu-ray 3D specifications.
There are also some technical specifications that need to be met. While there is no minimum requirement or restriction on the size of the TV, it does need to have a minimum refresh rate of 120 Hz. Each eye needs to see an image with a refresh rate of at least 60 Hz, which is where the total of 120 Hz comes from.
Just like with the TV's size, a higher number is always better! The higher the refresh rate, the smoother the 3D effect is going to be. A TV with a 240 Hz refresh rate is essentially better than one with 120 Hz, for example.
What's This About Refresh Rate?
This refers to how frequently the image on the screen is updated or refreshed. The image becomes increasingly smoother as the rate gets higher since it is getting updated faster. It is measured in hertz (Hz). A 60 Hz rate means that the screen is updated every sixtieth (1/60) of a second, or 16.7 milliseconds.
As far as 3D TV is concerned, the refresh rate needs to be at least double what would be acceptable on a standard 2D television. This is because there are two images being displayed, so the effective rate for each eye is halved. While 60 Hz would be acceptable for a 2D television, you would need at least 120 Hz in a 3D TV in order for the image to be as smooth. But like I said before, while 120 Hz looks good, you will get even better results using a TV with a 240 Hz refresh rate.
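A quick sketch of that halving, using the figures from the text:

```python
# With frame-alternating 3D, the panel shows left- and right-eye images on
# alternating refreshes, so each eye sees half the panel's refresh rate.

def per_eye_rate_hz(panel_hz: float) -> float:
    return panel_hz / 2.0

for panel_hz in (120, 240):
    print(f"{panel_hz} Hz panel -> {per_eye_rate_hz(panel_hz):.0f} Hz per eye")
# 120 Hz panel -> 60 Hz per eye
# 240 Hz panel -> 120 Hz per eye
```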
Types Of Content It Can Display
A 3D-ready TV will be able to display most types of 3D content. No industry-standard specifications have been set as far as 3D television broadcasts go, so it's hard to tell whether your TV will be able to view all 3D channels now and in the future until a standard is set. The more modern your TV is, the more likely it will be able to view the latest 3D content on cable and satellite TV.
There has been an industry standard set for 3D Blu-ray specifications though, so if your TV is 3D ready it's safe to say it will be able to display all 3D Blu-ray discs.
We hope this clears up the mystery behind what 3D Ready means! Please continue browsing the site for more 3D TV related information.
Matt Southern is a Public Relations grad and current Communications B.A. student.
He is passionate about technology and also runs a blog about social media.

How 3D TV Technology Works

3D television technology is becoming increasingly popular with each passing day. Due to the rise of popular 3D feature films (namely Pixar's Up and James Cameron's Avatar), major television manufacturers began developing three dimensional home television technology in 2009.
There are several methods that these manufacturers use to create 3D images on an LCD television; some are more expensive than others, and some are more feasible than others. This article will discuss the three primary methods of 3-D home theater technology that will be used in upcoming years.
Lenticular viewing: This technology has been pioneered by Philips, and is available as of today. TV sets that use this technology can be watched without those funny glasses that audiences used in theaters. These televisions use a lens that can send different images to each eye. That is, your left eye will see a completely different image from your right eye, which will emulate your two eyes' use of stereopsis (the process by which your eyes discern depth). The one weakness of lenticular viewing, however, is that a viewer must sit in a very specific spot in front of the TV. This means that only a couple people would be able to comfortably watch the TV at once due to its small viewing angle.
Passive glass systems: Hyundai is developing this type of LCD monitor, which will allow both 2D and 3D images to be viewed. To watch the 3-D images, viewers will need to wear the traditional glasses used in theaters. This technology is nothing new: the TV has two overlapping images and the glasses have polarized lenses. Each lens is polarized so that it can see only one of the two overlapping images. This technology is very feasible, and 40 to 50 inch LCD TVs with this technology are currently available for purchase.
Active glass systems: This system is very similar to the passive glass system, except rather than the TV doing all the work, the glasses do. The glasses synchronize with the refresh rate of the TV, then they alternate the polarization of each lens, making the wearers of the glasses see 3-D images. With this technology, people could be watching a 2-D movie comfortably, then switch the movie into 3-D at will. This type of monitor is being developed by Samsung and Mitsubishi, but the downside is that the glasses could be very expensive. Some predict the glasses to be upwards of $100.

3-D television expected to come to homes in 2011

Three-dimensional images are expected to jump out of movie theaters and into living rooms by next year.
Sony and Panasonic say they will release home 3-D television systems in 2010; Mitsubishi and JVC are reported to be working on similar products.
"TV finally becomes real" in three dimensions, said Robert Perry, an executive vice president at Panasonic. "You're in it. It's the next frontier."
Perry compared the 3-D transition to the switch from black-and-white to color television and the shift from standard- to high-definition images.
ESPN is test-recording some sporting events in 3-D, using cameras with two sets of lenses, which would make football players appear to jump out of home television screens during live 3-D broadcasts.
And, although television makers haven't released specifics, the price of 3-D TV -- which requires a new television, broadcasting content and 3-D glasses -- is not expected to be substantially higher than some high-definition televisions on the market now.
Still, there are skeptics who say that 3-D is not ready for prime-time home viewing.
There are concerns that 3-D broadcasts, which require twice the data, will gobble up an unworkable amount of television bandwidth. And some worry that 3-D glasses and graphics won't make a smooth transition to American living rooms.
Shane Sturgeon, publisher of HDTV Magazine, said some of the glasses give him a headache and will block some people from buying the new technology.
"From what I've seen from most of the manufacturers, it's just not there yet," he said of 3-D TV technology. "I think right now, the technology -- whether you're talking about the refresh rate or the strobing or the glasses -- there are too many things right now that get in the way of enjoyment of the film for it to kick off."
All 3-D technology relies on the idea that if separate images are presented to the left and right eyes, the human brain will combine them and create the illusion of a third dimension.
TV makers go about this in different ways, though.
Panasonic and Sony, which demonstrated their products for CNN at a recent tech expo in Atlanta, Georgia, use "active glasses" and TVs with high refresh rates to achieve the effect.

3D TV: Is the World Really Ready to Upgrade?

3D Televisions are everywhere in 2010, but we doubt the TV viewing world’s willingness to quickly take the plunge.
Also check out 3D TV: What You Need to Start Watching in 3D.
Call us practical, jaded or simply a good, old-fashioned stick in the mud, but when it comes to consumers upgrading to 3D television anytime soon, we just don’t see the point. Much ado has been made about this new technology at CES 2010 by manufacturers such as LG, Sony, Samsung, Toshiba and Panasonic, with one in four consumers surveyed by the CEA saying they plan to buy a 3D TV within the next three years. However, while ESPN plans to roll out the first official 3D sports network on June 11, and consumers are predicted to spend $17 billion on 3D TVs in 2018, per research firm DisplaySearch’s forecasts, we’re just not sold on the concept’s potential for rapid consumer uptake.
Why? Among other issues:

Lack of Current Demand

Let’s try a simple exercise: Prior to the debut of these announcements, name one person (save perhaps the odd rabid fanboy or futurist) you know of who recently said, “Boy these shows are great – I sure wish they could make it look like Oprah was in my living room, however.” It wasn’t even until Avatar put the concept of 3D on most consumers’ map that there was any real mainstream excitement surrounding the category. Similarly, it’s one thing to experience 3D technology while sitting in front of a three-story screen, where it’s more of an event, versus one’s everyday living room, where the activity becomes more mundane, making it hard to justify the cost of an immediate upgrade. Besides, since when was 2D storytelling and filmmaking broken to begin with?

Practicality

It’s bad enough having to hunt for the remote in your couch cushions. Now imagine having to do the same for 3D glasses that not only make you look goofy once located, but could also prove quite uncomfortable to wear in long-term sittings. Is this really the glorious future sci-fi novels once promised? Maybe, if you’re into migraine headaches, occasional screen flicker and, well, you know, looking like a complete toolbox. Somehow it just doesn’t seem worth the trouble to watch Monsters vs. Aliens ooze forth out of your screen.

3D TV Pricing

Though manufacturers are aiming to keep costs just slightly above high-end LED/LCD models, keep in mind that this would still put them at a fair premium above other sets. This will slow overall adoption rates, and be hard to swallow for countless consumers who’ve just purchased a new set within the last 12-18 months. To get true 3D content, you’ll also need access to 3D broadcast programming and/or a 3D Blu-ray player and 3D movies, plus 3D glasses, which won’t come cheap. While some models, such as Toshiba’s Cell TV, promise 2D to 3D upscaling, which converts traditional images into three-dimensional ones, that technology is expected to cost a pretty penny. Coupled with current economic conditions, it’s sure to keep the sets out of most consumers’ comfortable buying range, which may lead to smaller prospective audiences and content providers being unwilling to quickly produce compatible premium content as a result. And fewer must-see programs means fewer titles that can help push more 3D TVs into the market.
Given that the consumer electronics industry is coming out of a rough year or so, we understand why there’s been so much buzz – both the media and business insiders need a noteworthy innovation to rally behind. However, it’s going to take time until we really see compelling reasons for everyday shoppers to take the plunge (e.g. killer apps, 24-hour programming, ergonomic interfaces that make it simple and pleasant to watch 3D programs, etc.). As such, we can’t help but feel that current expectations for the rapid rise to prominence of this curious new television category are overly aggressive.
Will there be an eventual market for 3D HDTV technology? Undoubtedly. However, we expect it to take longer to reach the point of true mainstream saturation, transitioning over a period of time (just as we did from black-and-white sets to color). And, for that matter, we predict that the category needs to evolve considerably before it becomes the retail juggernaut and technological revolution that television manufacturers hope for.

Chum Kiu

Chum Kiu is the second of three open-hand forms of Wing Chun Kung Fu. It builds upon many of the basic principles and techniques learned in the first Wing Chun open-hand form, Siu Lim Tao[1]. The form may also be called Chum Kil[2].

History

Chum Kiu is a traditional open-hand form. It dates back to the Shaolin temple and the development of Wing Chun over two hundred years ago.

Technical aspects

Chum Kiu consists of a variety of techniques and movements designed to bridge the gap to an opponent, hence the name, Bridge Seeking Form[3]. Chum Kiu also builds upon arm and leg movements learnt in Siu Nim Tao to create a coherent fighting system[4]. This system is further expanded in the Biu Tze and Mook Yun Jong forms. Chum Kiu also teaches circular footwork, complex hand shapes and body turns [2].

Other aspects

Chum Kiu practice develops advanced stances and footwork[2], develops techniques designed to control an opponent[3] and includes some simultaneous attack and defence techniques[4]. It is a far more dynamic form than Siu Nim Tao, and places significant emphasis on techniques slightly outside the centreline[2].

Alternative versions of the form

Although many of the movements are similar, Chum Kiu varies significantly between schools. Some notable practitioners are viewable via the YouTube links below. Many more variations also exist.
Yip Man
Chu Shong Tin
Sifu Gross

Technical Aspects of CT Angiography

Introduction

A firm understanding of the fundamental principles underlying CT angiography, including spiral CT acquisition, image processing, and image display, is required in order to get consistently excellent results over a wide range of clinical applications. One of the most compelling advantages of CT angiography is the ability to provide all of the information which previously required two or more radiological studies that may, in the case of conventional angiography, be much more expensive than CT. Image processing techniques such as volume rendering enhance this ability, allowing the radiologist and clinician to interactively explore different aspects of the dataset to address many specific questions which impact patient management.
In this review we describe the fundamental concepts underlying data acquisition, image processing, and image display. We focus on a practical approach to optimization in each of these key areas which we have found provides fast, reliable, and accurate results in a busy radiology practice.
Spiral CT Acquisition
A fundamental concept of 3-D imaging for any application is that the quality and accuracy of the resulting images is ultimately limited by the quality and resolution of the dataset. The rapid developments in spiral CT technology over the past 6 years have resulted in scanning capabilities for volume data acquisition that provide unparalleled opportunities for nothing short of reinventing CT applications and protocols. Today's spiral scanners have come a long way from the earliest prototypes, which initially acquired a 12 sec study and soon thereafter a 24 sec study. While the latest "state of the art" capabilities are continually being upgraded, the typical top-of-the-line spiral scanner can acquire data with subsecond (.7-.75 sec) gantry rotation, supports a spiral length of 40-60 sec, and can acquire a second back-to-back spiral with only a 5 sec interscan delay. Scan parameters such as kVp and mAs are identical to those used with state of the art non-spiral protocols, typically operating in the range of 300 mAs. Data reconstruction times vary from 1-5 sec per slice, and reconstruction can be performed as a background function without interfering with the scanner's ability to scan other patients. The newest scanners acquire slices faster and allow longer total imaging times, thereby providing the large volumes of very high resolution data which are ideal for 3-D imaging.
A single breathhold for each phase of acquisition (i.e., arterial and venous) provides optimal results. With proper coaching, we have found that over 95% of patients can perform a 30-40 second breathhold. Shallow breathing works well in patients who are obviously not able to perform an extended breathhold.

Contrast Injection Techniques
Optimal CT angiographic imaging requires the rapid intravenous injection of contrast material via a power injector at 3-4 cc/sec. Higher rates of up to 5-7 cc/sec have been used in the literature but are not generally necessary in our experience. Clave et al1 found in phantom studies that a luminal attenuation of 150 HU gives optimal results for measuring carotid stenosis - this level of enhancement is easily achieved with an injection rate of 3-4 cc/sec. We use nonionic contrast injected via a 20 or 22 gauge angiocatheter in the antecubital vein.
Proper timing from the start of the injection to the start of scanning is essential to ensure imaging at the time of peak intravascular enhancement. Although automated techniques such as SmartPrep (GE Medical Systems) or C.A.R.E. Bolus (Siemens Medical Systems) are available to measure the contrast circulation time in individual patients, we have found that the use of an empiric delay of 25-30 seconds for arterial imaging in the chest and abdomen and a delay of 60-70 seconds for venous imaging is faster and yields excellent results in most patients. In older patients or patients with evidence of decreased cardiac function we typically increase the arterial and venous phase delays by approximately 10 seconds. The volume of contrast used should be adequate to maintain maximal vascular opacification throughout the spiral acquisition - 120-150 cc of contrast is adequate for most applications.
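As a back-of-the-envelope sketch of how these numbers fit together (the 35-second injection duration is an illustrative assumption, not a protocol recommendation):

```python
# Total contrast volume is roughly injection rate x injection duration; the
# injection should span the scan delay plus much of the spiral acquisition.

def contrast_volume_cc(rate_cc_per_s: float, duration_s: float) -> float:
    return rate_cc_per_s * duration_s

rate = 4.0       # cc/sec, within the 3-4 cc/sec range above
duration = 35.0  # sec, assumed: covers a 25-30 sec arterial delay plus spiral
print(contrast_volume_cc(rate, duration))  # 140.0 cc, inside the 120-150 cc range
```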
Because oral contrast may obscure intravascular contrast and necessitate more extensive editing of the dataset, we use water instead of positive contrast agents for abdominal applications of CT angiography. While this may at first seem to be a liability in terms of evaluating the bowel, we have found that in practice water is a very effective oral contrast agent which may be better than positive oral contrast agents.
Vessel Orientation
The orientation of the vessel of interest has an important impact on the accuracy and appearance of CT angiography. Because spiral CT can currently only reconstruct axial images, resolution in the axial plane is always higher than in the z direction (perpendicular to the axial plane). Wise et al2 have shown in phantom studies that artificial lumen eccentricity can be a significant problem for vessels which are not oriented perpendicular to the axial plane. This phenomenon is due to the anisotropic nature of the spiral CT data. Likewise, we have shown in phantom studies3 that CT angiography with volume rendering is significantly more accurate in assessing percent stenosis for vessels oriented perpendicular to the axial plane than for vessels with more in-plane orientations. The carotid arteries are particularly well suited to accurate measurement of luminal diameter due to their orientation, whereas renal arteries may be more difficult to accurately assess due to their more in-plane orientation.
Collimation and Table Speed
As a general rule, it is usually preferable to increase the pitch up to 2 rather than the collimation in order to achieve adequate coverage of the volume of interest. Polacin et al4 have shown that a pitch of 2 increases the effective slice thickness minimally compared to a pitch of 1 when 180 degree linear interpolation is employed. However, a pitch of 2 doubles the distance covered in the z direction. This favorable tradeoff should be exploited in most CT angiography applications to allow use of the narrowest collimation possible.
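A minimal sketch of the coverage arithmetic, assuming single-slice spiral geometry in which the table advances collimation x pitch per gantry rotation (the 3 mm collimation and 30 sec spiral are illustrative values):

```python
# z-direction coverage = table speed x scan time, where
# table speed = collimation x pitch / gantry rotation time.

def z_coverage_mm(collimation_mm: float, pitch: float,
                  scan_time_s: float, rotation_time_s: float = 1.0) -> float:
    table_speed_mm_s = collimation_mm * pitch / rotation_time_s
    return table_speed_mm_s * scan_time_s

print(z_coverage_mm(3.0, 1.0, 30.0))  # 90.0 mm
print(z_coverage_mm(3.0, 2.0, 30.0))  # 180.0 mm: pitch 2 doubles coverage
```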
Image Reconstruction Interval
One of the principal advantages of spiral CT over conventional CT is the ability to reconstruct images at any interval required. Overlapping reconstructions have been shown to improve the quality of 3-D images5,6 and are routinely used for CT angiographic applications. We have found that an overlap of 50% is adequate for most applications, although we routinely reconstruct spiral CT datasets at 1 mm intervals for highest resolution imaging of small vessels such as the renal arteries. Images can be reconstructed at any chosen interval from 1 to 10 mm, with the primary limiting factors being the time necessary to reconstruct the dataset (from less than 1 sec to 10 sec per reconstructed image) and the size of the resulting dataset, which may often be larger than 100 megabytes.
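A rough sketch of the storage arithmetic behind that figure, assuming 512 x 512 images at 2 bytes per pixel (both assumptions, not scanner-specific values):

```python
# Narrower reconstruction intervals multiply the slice count, and with it
# the dataset size: slices = spiral length / interval.

def dataset_size_mb(spiral_length_mm: float, interval_mm: float) -> float:
    n_slices = int(spiral_length_mm / interval_mm)
    bytes_per_slice = 512 * 512 * 2  # assumed 512 x 512 matrix, 2 bytes/pixel
    return n_slices * bytes_per_slice / 1e6

print(dataset_size_mb(300.0, 3.0))  # 100 slices -> ~52 MB
print(dataset_size_mb(300.0, 1.0))  # 300 slices -> ~157 MB
```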
While reconstructing at 1 mm intervals does provide optimal accuracy for 3-D CT angiography, it also presents a number of practical problems. Computer technology is rapidly making issues of reconstruction time and storage capacity less significant, but the problem of how to meticulously review hundreds of axial images in a timely fashion remains. We typically film only every 3rd image, but review the entire dataset at the computer workstation. Cine mode, multiplanar reconstructions, and interactive real-time volume rendering can all be valuable for reviewing large datasets quickly and completely. Many current computer workstations require that the entire dataset be reconstructed at the same interval; this fact makes the use of narrow reconstruction intervals only through the area of interest impractical if 3-D reconstructions are to be performed. It is best to store the raw data until acceptable 3-D reconstructions have been performed, thereby allowing later reconstructions at different intervals or using different reconstruction algorithms if necessary. Once the original data is deleted, retrospective changes in the reconstruction technique are not possible.
Subsecond Spiral Scanning
Subsecond spiral scanning times increase the distance which can be covered with a single spiral scan and further reduce motion artifacts without a significant reduction in image quality7,8, and therefore should be used for all CT angiography if available.

Data Editing

Editing of the spiral CT dataset is commonly used to remove high-attenuation structures such as bone which may interfere with visualization of the intravascular contrast. The need for data editing varies with the specific clinical application and the 3-D rendering technique used. While editing is absolutely essential with maximum intensity projection, other rendering techniques such as surface rendering and volume rendering may require little or no editing for effective visualization in many applications. Nevertheless, editing is useful and in some cases absolutely essential regardless of the rendering technique, and is an important and challenging area of ongoing research.
Manual Editing
Manual editing typically involves the user drawing a region of interest around the structures to be included in or excluded from the 3-D image. This is typically performed on the axial source images and may be accelerated by the use of slab techniques9 which allow contiguous slices to be grouped together and edited as a unit. Manual editing is time consuming (typically 30-60 minutes per case), but it is flexible and can be effectively applied to virtually all clinical applications. It is also widely available on virtually all commercially available 3-D software packages. Manual editing of the axial source images is not generally required when using real-time volume rendering with clip planes.
Automated Editing
Segmentation is the division of an image into multiple areas or objects (called primitives) with distinct features, such as individual organs or tumors. Humans perform this task using a complex analysis of size, shape, intensity, location, texture, and proximity to surrounding structures. Performing this task automatically using a computer has proven to be an extremely difficult problem which continues to be an important area of ongoing research. There continues to be no general computer segmentation algorithm that can be applied to all medical images or all regions of the body. Consequently, despite significant advances in image processing techniques, time-consuming manual editing of the CT dataset by an expert is still often required for optimal visualization.
Automated editing applications are available for specific domains such as the lung10, abdomen9, and liver11. These applications use a variety of computer techniques to identify wanted and unwanted structures. In general, all such applications will fail in some circumstances. Therefore, an expert user must monitor the automated editing and there must be a means for the user to correct errors. We have shown that an automated bone editing algorithm can significantly reduce editing time compared to manual editing for abdominal applications with limited input9. The interactivity provided by real time rendering greatly facilitates image editing by allowing very fast and effective manual editing with simple editing tools.
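As a toy illustration (not any of the cited algorithms), even the simplest automated approach, thresholding on attenuation, shows why an expert must monitor the result: contrast-enhanced vessels can fall in the same intensity range as bone. The 300 HU threshold and the synthetic volume are assumptions for illustration only.

```python
import numpy as np

def bone_mask(ct_hu: np.ndarray, threshold_hu: float = 300.0) -> np.ndarray:
    """Flag voxels above an attenuation threshold as candidate bone."""
    return ct_hu > threshold_hu

volume = np.random.normal(40.0, 20.0, size=(64, 256, 256))  # soft-tissue stand-in
volume[:, 100:120, 100:120] = 1000.0                        # synthetic bone
volume[:, 50:55, 50:55] = 350.0                             # enhanced vessel

mask = bone_mask(volume)
# The vessel voxels are flagged along with the bone and would be wrongly
# edited out, which is exactly the failure an expert user must correct.
print(mask[:, 50:55, 50:55].all())  # True: vessel misclassified as bone
```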
 
Cut-Planes
Real-time volume rendering with clip-plane editing provides a flexible means of interactively editing the actual 3-D image12. In this technique, user-prescribed clip planes, which can be positioned at any orientation or depth within the 3-D volume, are used to remove unwanted data, enabling the user to better visualize structures within the volume which would otherwise be obscured by overlying tissues. Multiple cut planes can be used as needed to allow optimal visualization from multiple orientations. This simple technique is flexible and fast - a diagnostically useful image can be created in literally seconds. The clip planes can also be used to pan through the data, creating images which combine the features of 3-D images and multiplanar reconstructions.
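A minimal sketch of the idea, assuming the plane is given by a normal vector and an offset in voxel coordinates (the parameters here are arbitrary examples):

```python
import numpy as np

def apply_clip_plane(volume: np.ndarray, normal, offset: float) -> np.ndarray:
    """Blank out every voxel on the far side of the plane normal . x > offset."""
    coords = np.stack(np.indices(volume.shape), axis=-1).astype(float)
    keep = coords @ np.asarray(normal, dtype=float) <= offset
    clipped = volume.copy()
    clipped[~keep] = volume.min()  # removed voxels rendered as empty space
    return clipped

volume = np.random.rand(64, 128, 128)
# Keep only the half of the volume with slice index <= 32:
clipped = apply_clip_plane(volume, normal=(1.0, 0.0, 0.0), offset=32.0)
```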

3-D Rendering Techniques

The large volume of data generated by modern spiral scanners challenges traditional methods for viewing radiological studies. Where a conventional CT study might have provided 4 sheets of images (12 images per sheet) which could easily be reviewed by a radiologist sitting in front of a light box, today's spiral scanners can generate hundreds of images which require many sheets of film to display. This problem has fueled the development of computer graphics workstations which allow the radiologist and clinician to interactively explore spiral CT datasets using a variety of display formats, including standard axial slices, reconstructed slices in any plane, and high quality 3-D images. Three-dimensional images integrate large volumes of data into a form which may be easier to interpret and is similar to other familiar studies such as catheter angiograms.
Most clinical studies of 3-D imaging to date have used surface rendering or maximum intensity projection (MIP) techniques for generating three-dimensional (3-D) images from CT datasets13-17. While some studies have shown that these 3-D techniques can be useful in clinical applications, several investigators have found standard axial and/or multiplanar images to be more accurate than the MIP or surface-rendered 3-D images for a variety of CT angiography applications, including the carotid arteries15, the renal arteries16, and the aortoiliac system17. Such mixed results highlight the inherent limitations of these 3-D rendering techniques. In order to speed image processing, both surface rendering and MIP ignore most of the available CT data and use very simple schemes to distinguish vessels from other tissues18. These compromises limit accuracy and are therefore less attractive with each successive generation of computing power. Nevertheless, surface rendering and MIP are widely available techniques and are clinically useful. We will discuss these rendering techniques briefly before providing a more detailed discussion of volume rendering, which provides a more flexible and accurate solution for 3-D visualization of CT angiographic data.
Surface Rendering
Surface Rendering was one of the earliest methods for 3-D display, and is available in most commercially available 3-D medical imaging packages. In this method, each voxel within the data set is determined to be a part of or not a part of the object of interest, usually by comparing the voxel intensity to some threshold value, thereby defining the "surface" of the object. With the surface determined, the remainder of the data is discarded. Surface contours are typically modeled as a collection of polygons and displayed with surface shading. The resulting image is a simplified representation of a structure which may be very inaccurate, particularly if the surface is difficult to determine precisely as is often the case in medical imaging. By converting the data from a volume to a surface, a large portion of the data available is forfeited in exchange for faster, easier computation. While this can be an advantage by allowing real-time rendering and thereby enhancing user interactivity, the usefulness of surface rendered medical images is generally limited by their artifacts and poor accuracy.
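A minimal sketch of that pipeline using scikit-image's marching cubes to extract the polygonal surface; the synthetic volume and the threshold level of 200 are illustrative assumptions:

```python
import numpy as np
from skimage import measure  # scikit-image

# Threshold-defined surface, modeled as polygons; everything that is not
# part of the extracted mesh is discarded before display.
volume = np.random.normal(0.0, 50.0, size=(32, 64, 64))
volume[10:20, 20:40, 20:40] = 400.0  # synthetic high-attenuation structure

verts, faces, normals, values = measure.marching_cubes(volume, level=200.0)
print(f"{len(verts)} vertices, {len(faces)} triangles kept; all other data lost")
```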
 
Maximum Intensity Projection
Like surface rendering, MIP is commonly available in commercial 3-D software packages and so has been extensively evaluated clinically, particularly with respect to its usefulness in creating angiographic images from CT and MRI data. The MIP algorithm evaluates each voxel along a line from the viewer's eye through the image and selects the maximum voxel value as the value of the corresponding display pixel. The resulting images are typically not displayed with surface shading or other devices to help the user appreciate the "depth" of the rendering, making three-dimensional relationships difficult to assess. If there is another high-intensity material along the ray through a vessel (such as calcification), the displayed pixel intensity will represent only the calcification and will contain no information from the intravascular contrast. Selection of the highest pixel value also increases the background mean of the image, particularly in enhancing structures such as the kidney and liver, thereby decreasing the visibility of vessels in these structures. Volume averaging coupled with the MIP algorithm commonly leads to MIP artifacts: a string-of-beads appearance in MIP images of normal vessels passing obliquely through a volume. While MIP has a number of important artifacts and shortcomings, it has been studied extensively and usually provides superior accuracy to surface rendering for CT angiography14.
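For an axis-aligned viewing direction, MIP reduces to a maximum along one array axis, as in the minimal sketch below; the synthetic volume is illustrative, and arbitrary view angles would first require resampling the volume along the ray direction.

```python
# MIP sketch: one ray per display pixel, keep the maximum voxel value.
import numpy as np

vol = np.random.normal(40.0, 20.0, size=(64, 256, 256))  # stand-in CT volume (HU)
vol[30, 100, 100] = 1200.0                               # a single bright calcification

mip_image = vol.max(axis=0)   # project along the slice (z) axis
# The display pixel at (100, 100) now shows only the calcification; all
# information about intravascular contrast along that ray is lost, as
# described above.
```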
Volume Rendering
Volume rendering19-31 is a more advanced and computationally intensive 3-D rendering algorithm that can incorporate all of the relevant data into the resulting 3-D image, overcoming many of the problems seen with surface rendering and MIP. Volume rendering is well suited to a wide range of medical and nonmedical visualization tasks because of its flexibility: data can be displayed with varying levels of opacity, surface shading, and perspective depending on the demands of each specific task. Continuing advances in computer power have transformed volume rendering from a somewhat cumbersome technique requiring computer resources that were not widely available into one which can now be performed at real-time frame rates (5-10 frames per second) on relatively inexpensive workstations.
Volume rendering was originally conceived at Lucasfilm in San Rafael, California. The computer graphics group there was created by Ed Catmull, Ph.D., and Alvy Ray Smith, Ph.D., who were recruited by George Lucas to develop new computer graphics techniques to create more realistic images for the movies. Early examples of their work included the special effects in the "Star Wars" and "Star Trek" movies. They developed their own parallel-processing computer, the Pixar Image Computer, and used its speed as the basis for further advances. Volume rendering was developed by three team members: Robert Drebin, Pat Hanrahan, and Loren Carpenter21. Volume rendering was unique in that it was applicable to a wide range of applications including seismic data display, wind tunnel testing, and medical imaging.
As the name implies, volume rendering renders the entire volume of data rather than just surfaces or maximum-intensity voxels, and so potentially conveys more information than a surface model. Volume rendering techniques sum the contributions of each voxel along a line from the viewer's eye through the data set. Because information from the entire data set is incorporated into the resulting image, much more powerful computers are necessary to perform volume rendering at a reasonable speed. We view volume rendering as the most advanced form of 3-D rendering currently available for creating accurate, clinically useful medical images. Volume rendering is just now being incorporated into commercially available software packages - with general availability and continued increases in computer power, it will likely become the most important rendering technique for 3-D medical imaging.

Volume Rendering: Implementations

Ray-Tracing
The original volume rendering algorithm described by Drebin, Carpenter, and Hanrahan used ray tracing to construct the 3-D image21. A number of specific implementations have been developed based on ray tracing. The ray-tracing approach sequentially computes the values for each displayed pixel in the 3-D image by calculating a weighted sum of all voxels encountered along a line (or ray) projected from the chosen viewing perspective through the data volume. This process is repeated by projecting a new parallel ray through the data for each displayed pixel. In order to create a 512 x 512 3-D image, this technique requires 262,144 sequential ray calculations! Additional calculations are required to incorporate surface shading into the image. While this approach can be slow, it can be implemented on very basic computer platforms, including personal computers without specialized graphics hardware.
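The sketch below shows the weighted-sum (emission-absorption) compositing such a ray tracer performs, restricted to axis-aligned rays so that each ray is simply a column of the array; the linear opacity ramp is an illustrative stand-in for a real transfer function, not the published algorithm.

```python
# Front-to-back compositing along axis-aligned rays (illustrative ramp).
import numpy as np

def composite_axis0(volume, lo=0.0, hi=300.0, opacity_scale=0.05):
    """Composite every ray (column) along axis 0. Attenuation is mapped
    to [0, 1] by a linear ramp between `lo` and `hi`; that fraction
    drives both emitted brightness and per-sample opacity."""
    frac = np.clip((volume - lo) / (hi - lo), 0.0, 1.0)
    alpha = frac * opacity_scale
    image = np.zeros(volume.shape[1:])
    transmittance = np.ones(volume.shape[1:])
    for step in range(volume.shape[0]):          # march front to back
        image += transmittance * alpha[step] * frac[step]
        transmittance *= 1.0 - alpha[step]
    return image

img = composite_axis0(np.random.normal(100.0, 80.0, size=(64, 128, 128)))
```

A true ray tracer computes the 262,144 rays of a 512 x 512 image one at a time from an arbitrary viewpoint; the NumPy version above simply advances all rays one depth step at a time.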
Schreiner et al32 have shown that different variations of the MIP algorithm can result in very different images. Similarly, although specific implementations of the volume rendering algorithm by various manufacturers share important fundamental features, differences in the interpolation algorithms and other features may produce very different results both in terms of image appearance and accuracy.
Hardware-Accelerated Techniques
Specialized computer graphics hardware is now commercially available which allows all of the pixel values in a 3-D image to be computed in parallel rather than with the serial approach typically used by ray-tracing programs. This approach affords dramatic improvements in speed - volume-rendered 3-D images can be generated at real-time rates (5-20 frames/sec). Real-time rendering allows true user interactivity with the dataset, making possible such complex applications as simulation of minimally invasive and surgical procedures.
The real-time volume rendering application used to create the 3-D images in this paper is a modified version of the "Volren" real-time volume rendering program. Volren is a texture-mapping-based rendering package which was developed by a team led by Brian Cabral at Silicon Graphics27. The system uses trilinear interpolation hardware to extract a parallel set of oblique slices from a three-dimensional dataset and uses rasterization and compositing hardware to combine the slices in a way that models the passage of light through a partially transparent, emissive, and possibly reflective medium. This approach is much faster than traditional ray-tracing methods, but is currently more limited in terms of surface shading options.
 

Volume Rendering: Parameters

Window width and level
Volume rendering typically segments data based on voxel attenuation. We use window width and level controls which are similar to those used for conventional axial display of CT images. While the window can be adjusted to standard settings used to display soft tissue, liver, bone, or lung, the real-time rendering system permits the user to interactively alter the window setting and instantly see the changes reflected in the displayed 3-D image. This interactivity allows the user to rapidly customize the display to specific cases with varying levels of contrast enhancement and to rapidly explore a variety of attenuation ranges.
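The mapping itself is simple, as in the minimal sketch below; the soft-tissue setting (width 400 HU, level 40 HU) is a common convention used here purely for illustration.

```python
# Window/level sketch: clip attenuation to the window and rescale to [0, 1].
import numpy as np

def window_level(volume, width=400.0, level=40.0):
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((volume - lo) / (hi - lo), 0.0, 1.0)

display = window_level(np.random.normal(40.0, 200.0, size=(64, 256, 256)))
```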
The transfer function used with volume rendering segments the data based on voxel attenuation but, unlike thresholding, it accurately models the physical reality that many voxels are only fractionally composed of intravascular contrast (or other materials). A standardized approach to selecting this transfer function is needed to ensure accurate, reproducible results for such applications as measuring vascular stenoses, because different rendering parameters can alter the apparent diameter of both the normal vessel and the stenotic segment. In a recent phantom study of CT angiography with volume rendering3, we demonstrated the accuracy of the following approach for selecting the transfer function: voxels with an attenuation equal to or greater than the nominal attenuation of the intravascular contrast were assumed to be composed of 100% contrast. Those with an attenuation less than or equal to that of the wall of the phantom vessel were considered to contain 0% intravascular contrast. Voxels with values between those of the wall and the intravascular contrast were considered to be only partially filled with contrast and were assigned a percent intravascular contrast between 0% and 100%. The measurements of % stenosis had a mean error of 2% for vessels oriented perpendicular to the axial plane, such as the carotids, suggesting that this is a valid approach for choosing the segmentation transfer function.
In clinical practice, a similar effect would be achieved by measuring the attenuation of the intravascular contrast and the adjacent soft tissue, which would then serve as the top and bottom points of the transfer function ramp, respectively. The presence of mural calcification would require a modification of the transfer function used in this study, with a second, downward-sloping ramp at higher attenuation values to separate intravascular contrast from calcium.
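A minimal sketch of such a transfer function, built as a piecewise-linear ramp with the second, downward-sloping segment for calcium, follows; all of the HU breakpoints are illustrative assumptions rather than measured values.

```python
# Piecewise-linear segmentation transfer function (illustrative breakpoints).
import numpy as np

def percent_contrast(volume, wall_hu=60.0, contrast_hu=250.0, calcium_hu=500.0):
    # wall -> 0% contrast, nominal contrast -> 100%, plateau, then a
    # downward ramp so dense calcification is excluded from the vessel.
    xp = [wall_hu, contrast_hu, calcium_hu, calcium_hu + 200.0]
    fp = [0.0, 1.0, 1.0, 0.0]
    return np.interp(volume, xp, fp)

frac = percent_contrast(np.random.normal(150.0, 120.0, size=(32, 64, 64)))
```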
Opacity
Opacity refers to the degree to which structures close to the user obscure structures which are farther away. Opacity can be varied from 0% to 100%. Higher opacity values produce an appearance similar to surface rendering which helps to clearly display complex 3-D relationships. Lower opacity values allow the user to "see through" structures, and can be very useful for such applications as seeing a free-floating thrombus within the lumen of a vein or evaluating bony abnormalities such as tumors which are located below the cortical surface.
While these properties of opacity are intuitive, varying the opacity also has a second, less intuitive but very important effect on the image: it changes the apparent size of objects. Higher opacity values make objects appear larger, while lower opacity values make them appear smaller. This property has important implications for applications which rely on measurements, including measuring % stenosis from CT angiography data. In our recent phantom study of CT angiography with volume rendering3, we found that an opacity of 50% gave the most accurate measurements of vessel diameter. Other opacity values may show specific features of the data to better advantage, but may also give inaccurate measurements. Further investigation is needed in this area to better characterize the interaction of opacity and other display parameters and their effect on the accuracy of the resulting image.
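The size effect can be seen in a toy one-dimensional vessel profile: the wall voxels are only partially filled with contrast, and a higher opacity setting lets those partial-volume voxels reach visual significance, widening the rendered vessel. All of the numbers below are illustrative.

```python
# Toy demonstration: apparent diameter grows with the opacity setting.
import numpy as np

x = np.linspace(-2.0, 2.0, 401)  # position across the vessel (mm)
# Fractional contrast: 100% in the lumen, ramping to 0% across the wall.
frac = np.clip(1.0 - (np.abs(x) - 0.8) / 0.4, 0.0, 1.0)

def apparent_diameter(opacity, visible=0.1):
    rendered = frac * opacity                  # rendered opacity per sample
    inside = x[rendered >= visible]            # samples bright enough to see
    return inside.max() - inside.min() if inside.size else 0.0

for opacity in (0.25, 0.5, 1.0):
    print(f"opacity {opacity:.2f}: apparent diameter {apparent_diameter(opacity):.2f} mm")
```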
Brightness
Brightness can be varied from 0% to 100%. Brightness affects the appearance of the image but does not affect accuracy - unlike opacity, it does not alter the apparent diameter of rendered structures. Brightness settings are largely subjective, based on the preferences of the individual user. We have found that a setting of 100% works well for nearly all applications.
 
Accuracy
Improved accuracy is the primary reason for using volume rendering rather than a simpler technique such as surface rendering or MIP. Volume rendering has been shown to be superior to surface rendering for such musculoskeletal applications as the detection of fracture gaps20. Preliminary phantom studies in our laboratory have shown volume rendering to be very accurate for quantifying vascular stenoses3. Similarly, in a recent study of patients with suspected renal artery stenosis, Johnson et al30 reported that volume rendering was extremely accurate in identifying stenoses of 50% or greater. Our phantom studies have also shown the potential for significant interobserver variability with this technique31. This variability stems largely from the tremendous flexibility of the volume rendering technique. We are currently developing display strategies to ensure consistent results between readers.

Display

Conventional Computer Display
The simpler display techniques which can be used to convey a 3-D effect with conventional computer monitors and hard copies include depth shading, obscuration, and lighting33. Depth shading simply involves making more "distant" structures appear darker than those "closer" to the observer. While such an effect is easily achieved even with modest computer resources, the utility of this technique by itself is very limited. Obscuration is another relatively simple display technique whereby structures close to the observer obscure the view of more distant structures. This property is closely related to the opacity level chosen by the user, and there is a tradeoff between the depth perception afforded by a relatively opaque structure and the common medical necessity of seeing through transparent superficial structures to appreciate deeper ones. Lighting models vary widely, from very simple ones based on the orientation of a surface relative to a single, fixed light source to much more complex models that account for multiple sources and the light reflected off of other structures within the rendered object. These complex models require much more computer power, often with only marginal improvements in viewer understanding compared to simple lighting models. Depth shading, obscuration, and some type of lighting are commonly used in combination in most 3-D rendering packages.
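As a minimal sketch of the simplest of these cues, depth shading can be folded into front-to-back compositing by attenuating each slice's contribution with increasing depth; the exponential falloff and opacity scale below are illustrative choices, not values from any particular package.

```python
# Depth-shaded compositing sketch: distant slices contribute more darkly.
import numpy as np

def depth_shaded_composite(volume, opacity_scale=0.05, falloff=0.02):
    frac = np.clip(volume / volume.max(), 0.0, 1.0)
    image = np.zeros(volume.shape[1:])
    transmittance = np.ones(volume.shape[1:])
    for depth in range(volume.shape[0]):
        shade = np.exp(-falloff * depth)        # darker with distance
        alpha = frac[depth] * opacity_scale
        image += transmittance * alpha * frac[depth] * shade
        transmittance *= 1.0 - alpha
    return image

img = depth_shaded_composite(np.random.rand(64, 128, 128))
```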
Kinetic depth effect refers to the depth cues that can be provided by rotating an object. This commonly used tool can be implemented with modest computer hardware by precalculating images from multiple angles around a single axis and displaying the images in rapid succession to provide a cine loop animation. Interactivity allows the user to control or alter the image to suit his or her needs. A simple example is user control of the direction and speed of image rotation in a cine loop display. Real-time volume rendering provides a higher level of interactivity which can significantly enhance the 3-D cues by allowing the user to alter the perspective and display parameters in real time. Real-time rendering has the significant advantage of allowing viewing from any angle, without the constraints imposed by a precalculated video loop display.
Stereoscopic Viewing
Stereoscopic display techniques convey perspective and depth cues by providing slightly different images to the left and right eyes. This effect can be achieved by slightly altering the perspective of alternating images and using shutter devices incorporated into the viewing eyewear which open and close to alternate frames between the left and right eyes. This technology is relatively inexpensive and can provide a dramatic 3-D effect which can be helpful in understanding complex anatomy. Head motion parallax allows the viewer to see an object from different angles as his or her head moves with respect to the display. When combined with stereoscopic viewing and real-time rendering speeds, this technique can provide a very realistic portrayal of 3-D relationships. While stereoscopic displays are not yet in routine clinical use, preliminary experiments in our laboratory show that both radiologists and nonradiologists prefer the stereoscopic display to conventional displays. We routinely view 3-D medical images using both conventional and stereoscopic displays.
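A minimal sketch of generating such a stereo pair follows: the same volume is rendered from two slightly different perspectives, here a rotation of +/-2 degrees about the vertical axis with a simple MIP standing in for the renderer; both the angle and the choice of MIP are illustrative.

```python
# Stereo-pair sketch: two renderings from slightly different perspectives.
import numpy as np
from scipy.ndimage import rotate

vol = np.random.normal(40.0, 20.0, size=(64, 128, 128))

def render_eye(volume, angle_deg):
    # Rotate about the vertical axis, i.e., within the (z, x) plane.
    turned = rotate(volume, angle_deg, axes=(0, 2), reshape=False, order=1)
    return turned.max(axis=0)      # MIP as a stand-in renderer

left_eye = render_eye(vol, -2.0)
right_eye = render_eye(vol, +2.0)
# A shuttered display would alternate left_eye and right_eye in sync with
# the eyewear to produce the stereoscopic effect described above.
```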

Conclusion

Careful attention to technical details in all phases of CT angiography including data acquisition, image processing, and image display is essential in order to consistently produce optimal vascular studies. A basic understanding of each of these steps helps the radiologist to tailor the examination to specific clinical problems and avoid potential pitfalls. With optimal technique, CT angiography can provide very accurate images which obviate the need for conventional angiography in many circumstances. Continuing advances in scanner and image processing technology promise to further enhance both the accuracy and the practicality of CT angiography.
 

References

1. Claves JL, Wise SW, Hopper KD, et al: Evaluation of contrast densities in the diagnosis of carotid stenosis by CT angiography. AJR 169:569-573, 1997
2. Wise SW, Hopper KD, Ten Have T, Schwartz T: Measuring carotid artery stenosis using CT angiography: the dilemma of artificial lumen eccentricity. AJR 170:919-923, 1998
3. Kuszyk BS, Heath DG, Johnson PT, Fishman EK: CT angiography with volume rendering: in vitro optimization and evaluation of accuracy in quantifying stenoses. AJR 168(3):79, 1997
4. Polacin A, Kalender WA, Marchal G: Evaluation of section sensitivity profiles and image noise in spiral CT. Radiology 185:29-35, 1992
5. Ney DR, Fishman EK, Kawashima A, Robertson DD, Scott WW: Comparison of helical and serial CT with regard to three-dimensional imaging of musculoskeletal anatomy. Radiology 185:865-869, 1992
6. Brink JA, Lim JT, Wang G, Heiken JP, Deyoe DA, Vannier MW: Technical optimization of spiral CT for depiction of renal artery stenosis: in vitro analysis. Radiology 194:157-163, 1995
7. Herts BR, Baker ME, Davros WJ, et al: Helical CT of the abdomen: comparison of image quality between scan times of .75 and 1 sec per revolution. AJR 167:58-60, 1996
8. Fishman EK: High-resolution three-dimensional imaging from subsecond helical CT datasets: applications in vascular imaging. AJR 169:441-443, 1997
9. Fishman EK, Liang CC, Kuszyk BS, Davi SE, Heath DG, Hentschel D, Duffy SV, Gupta A: Automated bone editing algorithm for CT angiography: preliminary results. AJR 166:669-672, 1996
10. Ney DR, Kuhlman JE, Hruban RH, Ren H, Hutchins GM, Fishman EK: Three-dimensional CT - volumetric reconstruction and display of the bronchial tree. Invest Radiol 25:736-742, 1990
11. Gao L, Heath DG, Kuszyk BS, Fishman EK: Automatic liver segmentation technique for three-dimensional visualization of CT data. Radiology 201:359-364, 1996
12. Johnson PT, Heath DG, Kuszyk BS, Fishman EK: CT angiography with volume rendering: advantages and applications in splanchnic vascular imaging. Radiology 200:564-568, 1996
13. Rubin GD, Dake MD, Napel SA, McDonnell CH, Jeffrey RB Jr: Three-dimensional spiral CT angiography of the abdomen: initial clinical experience. Radiology 186:147-152, 1993
14. Rubin GD, Dake MD, Napel S, et al: Spiral CT of renal artery stenosis: comparison of three-dimensional rendering techniques. Radiology 190:181-189, 1994
15. Cumming MJ, Morrow IM: Carotid artery stenosis: a prospective comparison of CT angiography and conventional angiography. AJR 163:517-523, 1994
16. Galanski M, Prokop M, Chavan A, Schaefer CM, Jandeleit K, Nischelsky JE: Renal artery stenoses: spiral CT angiography. Radiology 189:185-193, 1993
17. Rieker O, Düber C, Neufang A, Pitton M, Schwenden F, Thelen M: CT angiography versus intraarterial digital subtraction angiography for assessment of aortoiliac occlusive disease. AJR 169:1133-1138, 1997
18. Heath DG, Soyer P, Kuszyk BS, et al: Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques. RadioGraphics 15:1001-1011, 1995
19. Fishman EK, Drebin RA, Magid D, et al: Volumetric rendering techniques: applications for three-dimensional imaging of the hip. Radiology 161:56-61, 1987
20. Fishman EK, Drebin RA, Hruban RH, et al: Three-dimensional reconstruction of the human body. AJR 18:53-59, 1988
21. Drebin RA, Carpenter L, Hanrahan P: Volume rendering. Comput Graphics 22:65-74, 1988
22. Drebin RA, Magid D, Robertson DD, Fishman EK: Fidelity of three-dimensional CT imaging for detecting fracture gaps. J Comput Assist Tomogr 13:487-489, 1989
23. Ney DR, Drebin RA, Fishman EK, Magid D: Volumetric rendering of computed tomographic data: principles and techniques. IEEE Comput Graphics Appl 10:24-32, 1990
24. Fishman EK, Magid D, Ney DR, et al: Three-dimensional imaging: state of the art. Radiology 181:321-337, 1991
25. Woodhouse CE, Ney DR, Sitzmann JV, Fishman EK: Spiral computed tomography arterial portography with three-dimensional volumetric rendering for oncologic surgery planning. Invest Radiol 29:1031-1037, 1994
26. Kuszyk BS, Heath DG, Ney DR, et al: CT angiography with volume rendering: imaging findings. AJR 165:445-448, 1995
27. Cabral B, Cam N, Foran J: Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In: ACM/IEEE Symposium on Volume Visualization, Washington, DC, 1994
28. Rubin GD, Beaulieu CF, Argiro V, et al: Perspective volume rendering of CT and MR images: applications for endoscopic imaging. Radiology 199:321-330, 1996
29. Kuszyk BS, Heath DG, Bliss DF, Fishman EK: Skeletal 3-D CT: advantages of volume rendering over surface rendering. Skeletal Radiol 25:207-214, 1996
30. Johnson PT, Halpern EJ, Kuszyk BS, et al: CT angiography of renal artery stenosis: comparison of real-time volume rendering algorithm with a maximum intensity projection algorithm. Radiology 205(P):295, 1997
31. Ebert DS, Heath DG, Kuszyk BS, et al: Evaluating the potential and problems of three-dimensional computed tomography measurements of arterial stenosis. J Digital Imag 11(3):1-8, 1998 (in press)
32. Schreiner S, Paschal CB, Galloway RL: Comparison of projection algorithms used for the construction of maximum intensity images. J Comput Assist Tomogr 20:56-67, 1996
33. Kuszyk BS, Ney DR, Fishman EK: The current state of the art in 3D oncologic imaging: an overview. Int J Radiation Oncology Biol Phys 33:1029-1039, 1995
34. Wise SW, Hopper KD, Schwartz TA, Ten Have TR: Technical factors of CT angiography studied with a carotid artery phantom. AJNR 18:401-408, 1997
35. Dix JE, Evans AJ, Kallmes DF, Sobel AH, Phillips CD: Accuracy and precision of CT angiography in a model of carotid artery bifurcation stenosis. AJNR 18:409-415, 1997