Thursday, September 23, 2010

Convert Windows XP into a Windows 7 Virtual Machine with Disk2vhd

Would you like to be able to still run Microsoft Windows XP while you get familiar with Windows 7? Well, moving your existing Windows XP system to a virtual machine that you can run in Windows 7 is a relatively easy procedure with the Disk2vhd tool from Microsoft’s Windows Sysinternals team: Mark Russinovich and Bryce Cogswell.

In this edition of the Windows Desktop Report, I’ll show you how to use Disk2vhd, which is a free tool, to move your Windows XP installation into Windows 7 and then run it with Windows Virtual PC.

What is Disk2vhd?

As its name implies, Disk2vhd is designed to create VHD versions of physical disks. This tool can be used to convert systems running Windows XP SP2 and up as well as Windows Server 2003 and up.

To perform this task, the Disk2vhd utility makes use of the Volume Snapshot feature built into the operating system. When you run Disk2vhd, it first creates a volume snapshot image of the hard disk. It then exports that image into a VHD that you can add to Windows Virtual PC as well as to Hyper-V Manager.

If you’ll be using Windows Virtual PC, keep in mind that it supports a maximum virtual disk size of 127GB. If you create a VHD from a larger disk it will not be accessible from a Windows Virtual PC virtual machine. Another thing to keep in mind is that Windows Virtual PC doesn’t support the Multiprocessor Specification, and it will not be able to boot VHDs captured from multiprocessor systems.

Preparation

In order to ensure a successful virtual machine transition, there are several tasks that you’ll want to perform on your Windows XP system in preparation for the operation. Let’s take a closer look.

* Backup: You’ll want to back up your system using Windows XP’s Backup Utility or a third-party disk imaging tool, such as EASEUS Todo Backup, which is a free package that I used for my test configuration. That way if anything goes awry, you can restore your Windows XP system and get right back to work. Just to be on the safe side, you may also want to back up all your data on CD/DVD or on an external hard disk. While it may sound like overkill, having an extra backup of your data will give you peace of mind.
* Optimization: You’ll want to make sure that your Windows XP system and hard disk are in tip-top shape by running Disk Cleanup and Disk Defragmenter. Doing so will help make the operation run quickly and smoothly. By running Disk Cleanup, all unnecessary files will be removed, such as trash in the Recycle Bin and Temporary Internet files. By running Disk Defragmenter, your hard disk will be ready for optimal performance.
* Windows Update: You’ll want to run Windows Update on your Windows XP system and make absolutely sure that all current updates are downloaded and installed.

My example configuration

In my example, I’ll be using two different computers: one computer running Windows XP SP3 and one computer running Windows 7. I’ll run Disk2vhd on the XP system and create the VHD on an external hard disk. The drive on this XP system is using about 40GB on an 80GB hard disk. I’ll then move the virtual machine over to Windows 7 and run it there using Windows Virtual PC.

Getting Disk2vhd

You can get and use Disk2vhd in one of two ways. You can download Disk2vhd from the Windows Sysinternals page on the Microsoft TechNet site, or you can run Disk2vhd immediately from the Live.Sysinternals.com site. Either way, the utility does not require installation, which means that using it is as easy as launching the executable.
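Incidentally, the Sysinternals page notes that Disk2vhd also accepts command-line arguments, which is handy if you want to script the capture. A sketch based on that documented syntax - the drive letters and output path here are just examples:

```
rem Usage: disk2vhd <[drive: [drive:]...]|[*]> <vhdfile>
rem Capture every volume into a single VHD on an external drive:
disk2vhd * e:\vhd\winxp.vhd

rem Or capture only drive C:
disk2vhd c: e:\vhd\winxp.vhd
```

As with the GUI, writing the VHD to a disk other than the one being captured will speed things up.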

Converting the system

I decided to run Disk2vhd from the Live.Sysinternals.com site for this article. Once I clicked on the link, I immediately saw the Internet Explorer File Download - Security Warning dialog box, shown in Figure A.


Figure A

As soon as the download commences, Internet Explorer displays the File Download - Security Warning dialog box.

When you click the Run button, the download begins, as shown in Figure B.

Figure B

The actual download is really quick.

Once the download is complete, the executable begins to launch, which triggers the Internet Explorer - Security Warning, shown in Figure C.


Figure C

As soon as the executable begins to launch, Internet Explorer displays a Security Warning dialog box.

You’ll then need to accept the Sysinternals Software License Terms, as shown in Figure D.

Figure D

You are encouraged to read through the Software License Terms.

When you get to the main user interface, you'll see that Disk2vhd automatically selects all the available volumes/drives, sets a default location, and chooses a name for the VHD file. You can choose a different location and name if you want. In fact, the procedure will go faster if you save the VHD to a different hard disk than the one you are converting. Then, select the Prepare for Use in Virtual PC check box. As you can see in Figure E, I chose only the main drive and selected an external hard disk as the destination. When you are ready, click the Create button.


Figure E

You can choose specific drives as well as a different location and name.

You’ll then have to sit back and wait. Depending on how large your hard disk is and the location of your destination folder, the process can take a while to complete. As the process chugs away, you’ll see the progress, as shown in Figure F.

Figure F

The progress gauge keeps you apprised of the conversion procedure.

For instance, my XP system with about 40GB on an 80GB drive took just a little over two hours to complete the conversion and transfer to an external USB hard disk. When the operation is complete, you’ll see the screen shown in Figure G.


Figure G

My example system took a little over two hours to convert to a virtual machine.

Running XP in Windows Virtual PC

Once you copy the virtual Windows XP system’s .vhd file over to the Windows 7 system, you’ll then open Windows Virtual PC and launch the Create a Virtual Machine wizard, as shown in Figure H.

Figure H


You’ll launch the Create Virtual Machine wizard.

In order to reduce the amount of initial tweaking, I configured my virtual machine to have the same amount of RAM as its physical counterpart, as shown in Figure I.

Figure I

You’ll specify the amount of RAM that you want your Windows XP virtual machine to have.

When prompted to add a virtual hard disk, select the Use an Existing Virtual Hard Disk option and then use the Browse button to locate your .vhd file, as shown in Figure J. When you’ve done so, just click the Create button.


Figure J

Select the Use an Existing Virtual Hard Disk option and then locate your VHD.

Once the Create a Virtual Machine wizard is complete, just reopen Windows Virtual PC, locate the new virtual machine, and launch it. Of course, I had to do a bit of tweaking to get my XP system running, and because XP was now running on a new system, I had to reactivate Windows XP. However, all that was relatively painless, and I was able to run my existing Windows XP installation in Windows 7, as shown in Figure K.

Figure K

Windows XP is running inside of Windows Virtual PC in Windows 7.

What’s your take?

Will you use Disk2vhd to virtualize your Windows XP system? Have you used Disk2vhd before? If so, what has been your experience? As always, if you have comments or information to share about this topic, please take a moment to drop by the discussion and let us hear from you.

Interview between TechRepublic & Jeff Mullen: Debit/credit card fraud: Can smart payment cards prevent it?

Is an intelligent and interactive payment system the answer to debit/credit card fraud? Dynamics, Inc. thinks so. Find out what they are up to.

—————————————————————————————–

Current payment-card technology in the United States is low-hanging fruit for criminals. Why? Other countries of interest to the bad guys are using Chip and PIN systems. Not necessarily the best answer, but more secure than the current magnetic-stripe approach used in the United States.

So why isn’t the U.S. converting to Chip and PIN? The cost to replace 60 million magnetic-stripe readers might have something to do with it.

Recently, a company surfaced with an alternative solution. Dynamics, Inc. on September 14 gave a presentation at Demo Fall 2010 (scroll down to the Dynamics, Inc. video) demonstrating how to increase payment-card security while remaining economically feasible.


Credit cards on steroids

In the video, founder and CEO Jeff Mullen describes what amounts to a credit card with a built-in computer. Amazingly, it looks like a normal credit card, except for the LEDs and readout.

That means there is some out-there technology going on, and I had to learn more about it. So, I contacted Jeff Mullen, and he kindly provided the following insight into his company and its inventions:

TechRepublic: Dynamics, Inc. was started by you in 2007. Could you give a brief overview of the company?

Jeff Mullen: Dynamics, Inc. is focused on engineering next-generation payment solutions. In the U.S. this takes the form of complementing the current magnetic stripe reader acceptance infrastructure. We do this to solve many problems.

One problem we solve is giving consumers the power of choice at the point of sale. Consumers will be able to select options on their cards and have these options communicated to their card issuer via the existing infrastructure. We call this heightened social interaction between a cardholder and their issuer the Payments 2.0 application space.


TechRepublic: Could you describe Card 2.0 and Electronic Stripe, the two technologies you incorporate in your cards?

Jeff Mullen: The Card 2.0 platform is a complete computer architecture that has a processor and a number of sub-circuits for various functions (e.g., power management and control).

The Electronic Stripe technology is the world’s first fully-programmable magnetic stripe, meaning the Card 2.0 platform has control of what is written to the magnetic stripe.

TechRepublic: Can you give us an idea as to what it takes to fit 70 components and a battery into a piece of plastic that is less than a millimeter thick?


Jeff Mullen: Approximately three years of work from a team of extremely dedicated, disciplined, and focused engineers; a number of confidential partners; and millions of dollars in capital. (If you watch the Demo Fall 2010 video linked above, you will see how the components are physically arranged.)

TechRepublic: There are two types of payment cards, MultiAccount and Hidden. Could you describe what each payment card offers?

Jeff Mullen: Sure, here are the descriptions we used in the Dynamics Inc. press release:

MultiAccount: The device includes two buttons on the face of a card. Next to each button is a printed account number and a light source. The user can select an account by pressing one of the buttons. The card visually indicates the selection by turning ON the light source associated with the selected account.

Then the information associated with the selected account is written to the Electronic Stripe. The card can then be swiped at any current magnetic-stripe reader. The slide below is one example:

Hidden: The device includes five buttons on the face of a card and a thin flexible display. The display hides a portion of a cardholder’s payment card number. To turn the device ON, a user must enter a personal unlocking code into the card. If the user enters in the correct unlocking code, the card will then visually display the user’s payment card number so that the user can read the number for online transactions.


The Electronic Stripe is then populated with the correct magnetic information so that the card can also be used with magnetic stripe readers. After a period of time, the display turns OFF and the Electronic Stripe erases itself - thus removing all critical payment information from the surface of the card. If the card is lost or stolen, the card is essentially useless. The slide below shows the Hidden card, note the series of buttons used to input the card owner’s personal code:

One thing that I would like to reiterate: Mr. Mullen pointed out that both cards are capable of working on current payment-card scanners - something that no other solution has been able to accomplish.

TechRepublic: Do you see Card 2.0 technology solving problems not related to security?

Jeff Mullen: We solve a number of core-payment problems not related to security. I can’t disclose more right now. But, as more information is released, I think everyone will start to realize the power of the Payments 2.0 application space.

TechRepublic: Besides payment cards, do you see any other uses for your card technology?

Jeff Mullen: Card 2.0 technology is a platform. Dynamics, Inc. will continue to introduce valuable new technologies to card issuers and card holders. That said, there are several other markets in which the platform can provide significant value - for example, security cards, medical cards, and identification cards.


Final thoughts

As a security type, I see lots of potential for computerized payment cards. They could easily become multi-factor authentication devices, verifying a relationship between the card and the card's owner as well as a relationship between the card and the financial institution.

What I’d like to see is a MultiAccount/Hidden combination card, gaining both increased security and convenience. I would like to thank Jeff Mullen for explaining the technology behind Card 2.0 and Electronic Stripe and Melinda Jenkins of Edelman.com for her assistance.

Geek Gifts 2010: Halo: Reach

The Halo franchise added another title to the series this week with Bungie's release of Halo: Reach. I was counting the days until this release because the Halo games are some of my favorites, and Master Chief is my all-time favorite video game character.

Halo: Reach is the story of Noble Team, a group of six genetically engineered Spartan-class soldiers, the same kind of soldier as Master Chief, the protagonist of Halo 1, Halo 2, and Halo 3. The previous installments of the Halo series chronicled the fight between humanity and a religious alliance of alien species called the Covenant. This chapter is a prequel to the series, and its events take place as humanity learns that it is at war with an alien menace capable of wiping out the human race.


New features

  • Player rewards system, which includes Credits for purchasing armor permutations
  • Armory for purchasing armor and even voices of favorite characters
  • Daily and weekly challenge system
  • Armor abilities, which include jet packs, camouflage, and holograms
  • Queue joining so that you can connect with your friends more easily
  • Improved voting system lets you enjoy the game the way you want to enjoy it

Improvements

  • Campaign matchmaking so that you can enjoy story mode with a team anytime
  • Revamped Firefight mode, introduced in ODST, now includes matchmaking component
  • Forge World is like forge on massive amounts of steroids
  • Even more game customization options than Halo 3

What I like

Matchmaking

The Active Roster and queue joining are welcome additions to the Halo multiplayer experience. With Halo 3, you had to check on your friends over and over, watching the time count down on their game. With Halo: Reach, you can join a queue that will put you in their lobby as soon as their game ends; this means no more spamming your friends with invites, hoping they will remember to invite you when they finish.


The connection options have been revamped, and there are new social settings that allow you to specify the kind of experience you like. You can define your level of chattiness, competitiveness, teamwork, and tone so that you end up with the same kinds of players that you are.

The improved voting system has been tweaked since the Halo: Reach beta. A menu is displayed before the game starts, giving the player the ability to vote for one of three gametype/map combinations or none of the above. This makes it much easier to get a gametype that the majority of the room will find enjoyable.

Forge/Custom games

My favorite thing about Halo: Reach is the ability to create games that suit you. The new version of Bungie's sandbox editor is impressive in its scope. Forge World, a map five times larger than any you have seen in a Halo game, provides incredible variety in the types of locations on which to build. From an empty room perfect for Grifball to a large canyon with caves, side paths, and cliffs, there is an environment to suit nearly anyone's creative vision.

The number of objects that can be placed on a customized map has been increased dramatically and even includes two- and three-story buildings as single placeable objects. Moreover, the settings for custom games can be changed with even more granularity than in Halo 3, which was already the most customizable FPS on the market.


Firefight

For anyone who never played Halo: ODST: in the Firefight game mode, wave after wave of Covenant forces is hurled at your squad. This mode provided a number of memorable moments for me in the past, and it is sure to do the same in the future. The entire system has been revamped for the Spartan program, put into matchmaking, and promises to be a wild ride.

Campaign

Overall, the Campaign was mostly what I expect from a Halo game. It was a challenging FPS experience with very few of the repetitive battles that marked Halo: CE and Halo 2. The enemies were definitely ramped up and more difficult to kill in Halo: Reach than in any previous Halo title, leading me to believe that Noble Team took out the majority of the veteran Covenant forces before the events of Halo: CE took place.


The armor abilities added a refreshing dimension to the Halo gameplay experience; it was nice to be able to sprint up to an enemy with their back turned to get an assassination, complete with special video sequences. The jet pack is very useful for getting to elevated positions without looking for stairs; armor lock is great when you are on the verge of dying; and the hologram makes for an extremely effective diversion.

The AI for opposing forces is really good, especially on the more difficult levels. On Legendary (the most difficult setting), forces split and go to opposite sides, flanking the player at every opportunity and removing flanking as a viable tactic throughout most of the campaign. At the beginning of the game, Noble One, the team leader, tells you that the “Lone Wolf” stuff won’t fly with Noble Team, which is great advice — stay with the team so they can support your efforts at killing all of the enemies on the battlefield.

General improvements

The Armory and rewards systems should increase the replay value of Halo: Reach, which was already high because of the multiplayer component. Many people go nuts for character customization options and, in Halo: Reach, character customizations even go through to campaign mode cinematics.

The Daily and Weekly Challenges should keep a higher number of people engaged for a longer time with Halo: Reach.


What I don’t like

Despite the fact that it was a direct part of the storyline, I felt that the space-fighting sequence was out of place and a little boring. Halo is supposed to keep your blood pumping the entire time you aren't watching a cinematic sequence, and I found the dogfighting sequence a bit slow. This is probably because I have an affinity and appreciation for good flight simulators (my father is a pilot who wrote simulators for fighters and even the space station), and this one is good but not great. It's not surprising, since flight sims are outside Bungie's wheelhouse, but that is the biggest quibble I have with the Campaign.

Geek bottom line

If you are a fan of the FPS genre, then Halo: Reach is definitely a must-have, especially because the video game has more replay value than any FPS I have seen. And if you like creating games based on the FPS model, you couldn’t ask for a better engine than Halo: Reach. The amount of customization available in Forge, the map editor, and Custom Game modes is incredible and will allow the Halo community to come up with some awesome creations, from map remakes to completely new gametypes. Bungie promises it will eventually include community-created game modes and maps, as it did with Grifball and maps like Utah Mambo in Halo 3.


If you are not yet a fan of the Halo series, there’s no reason to buy Halo: Reach in an edition other than Standard; the more expensive offerings only give you collectible and commemorative additions to the game. Also, if you won’t be logging on to take advantage of the incredible increase in multiplayer options (where you can even get placed on a team with which you can go through the campaign), then I recommend that you rent or GameFly this title — it will be worth the 12 or so hours it takes to go through the story (I finished with one friend in 10.5 hours on Legendary).

Geek gift guide

  • Fun factor: *****
  • Geek factor: ***
  • Value: *****
  • Overall: *****

How to install Windows Server 2008 R2 with Hyper-V and Windows 7 on the same partition

Typically, dual-booting multiple operating systems requires repartitioning a disk, which isn't always desirable, especially if you already have a multi-boot environment with Windows and Linux. What I am proposing is booting from a VHD - a virtual hard disk that contains the entire Windows Server 2008 R2 OS installation within a single, portable file hosted by your Windows 7 file system.

What’s different about this post from the other boot-from-VHD posts out there? Admittedly, I did learn how to create and install into VHD from some of the TechNet posts, but they focus on creating VHDs from within the WinPE console. Unfortunately, most of us work in Windows, not WinPE. So, what I have attempted here is to show you how to create the VHD from Windows 7 (or Windows Server 2008 R2), so that you can create VHDs for other purposes in addition to just an OS install. Additionally, I’ll try to provide some other scenarios where you might want to consider using VHDs.

Why would you want to boot from a VHD?

There are several reasons:

* There is no requirement to repartition your hard drive, which in itself tends to waste disk space since most partitions are typically over-provisioned.
* It simplifies image management for both VMs and physical systems as the same VHDs can be repurposed for both use cases.
* You can move the VHD to a Hyper-V server or port it to another virtualization platform that supports VHDs, such as ESX, VirtualBox, or Xen.
* The VHD can be configured to be thin provisioned. This means that you can set the maximum size of the VHD and it will appear to the guest OS as a full partition, but in the host OS, it will only consume as much disk space as required to contain the entire guest OS. The VHD will grow in size up to the maximum as blocks are written to (allocate on write).
* You can remove the entire OS by simply removing a single file and updating your boot menu.
* It allows you to boot easily from an external device like an eSATA drive (USB and remote storage are not supported for Windows 7 or Windows Server 2008 R2; Hyper-V Server is supported on USB/flash).
* You can easily back up the entire OS as a single file (like you would a VM).
* You can have versioned OSs by using a differencing disk to create a parent-child relationship between VHDs. This can be very disk-space friendly if you manage many images.

What you’ll need:

Note: In the examples below, I am doing everything on drive C: and assigning drive letter Q: to the VHD but you can use any drive that Windows 7 has available.

Preparing the VHD

First we’ll need to create a VHD on the Windows 7 system using the DISKPART command:

1. From Start->All Programs->Accessories, right-click Command Prompt and select "Run as Administrator." At the command prompt, type diskpart and press Enter - DISKPART will launch and you will be put into the DISKPART CLI shell.

2. Let’s have a look at what volumes DISKPART can see. Type:

list vol↵

Take note of what you see.

3. To create an expandable VHD that can grow to a maximum size of 15000MB, type:

create vdisk file=c:\win2k8r2.vhd maximum=15000 type=expandable↵

4. To set the focus of DISKPART to the newly created VHD type:

select vdisk file=c:\win2k8r2.vhd↵

5. To attach the virtual disk to the system type:

attach vdisk↵

6. We will need a primary partition within the virtual disk to make the VHD bootable; type:

create partition primary↵

7. Although the partition can be formatted as part of the Windows Server installation, I prefer to do it now. To format the partition with the NTFS file system, type:

format fs=ntfs quick label="NewVHD"↵

8. We don’t really need to assign a drive letter to the VHD at this point since during the install of Windows Server, it will get a different drive letter anyway, but it makes it more convenient to investigate the VHD from Windows 7. Assign the drive letter Q: to the new partition by typing:

assign letter=q↵

9. Let’s have a look at what volumes DISKPART can see now. Type:

list vol↵

You should see the new volume available with a size of 14GB.

10. To exit the DISKPART shell type:

exit↵

11. To exit the command shell type:

exit↵

12. Use Windows Explorer to check the size of the file c:\win2k8r2.vhd that contains the VHD. It should be around 80MB. It will grow from here as we add contents to the volume.

13. For fun, right-click Computer from the Start Menu and you should see drive Q: mounted. You can check the properties of drive Q: by right-clicking it.
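As a side note, if you find yourself creating VHDs often, DISKPART can also run non-interactively via its /s switch. The interactive commands from steps 3 through 8 above, collected into a script file, would look something like this (same file name, size, and label as in the example):

```
rem Save as c:\makevhd.txt and run from an elevated prompt with:
rem   diskpart /s c:\makevhd.txt
create vdisk file=c:\win2k8r2.vhd maximum=15000 type=expandable
select vdisk file=c:\win2k8r2.vhd
attach vdisk
create partition primary
format fs=ntfs quick label="NewVHD"
assign letter=q
```

This makes it easy to stamp out additional VHDs later for other purposes.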

Installing Windows Server

Now we are ready to install Windows Server 2008 R2 onto the newly formatted partition within the VHD. I’ll provide general instructions here, just highlighting the differences from a standard installation.

1. Boot from the Windows Server 2008 R2 ISO. At the screen that prompts you to select a language, press SHIFT+F10 to access the WinPE console.

2. To launch the DISKPART CLI shell:

diskpart↵

3. Let’s have a look at what volumes DISKPART can see. Type:

list vol↵

4. To set the focus of DISKPART to the previously created VHD, type:

select vdisk file=c:\win2k8r2.vhd↵

5. To attach the virtual disk to the system, type:

attach vdisk↵

6. Let’s have a look at what volumes DISKPART can see. Type:

list vol↵

7. To exit the DISKPART shell, type:

exit↵

8. To exit the WinPE shell, type:

exit↵

9. Return to the Windows Server 2008 R2 setup and select Custom (advanced) as the installation type, not Upgrade.

10. When prompted for the installation location, select the newly formatted volume that has the label NewVHD.

11. Perform the remainder of the installation as usual.

12. When you reboot you will notice that you get a boot menu allowing you to select the OS of your choice. Select Windows Server 2008 R2.

13. Turn on the Hyper-V role.

Now you have a dual boot Windows 7 and Windows Server 2008 R2 system that can also run the Hyper-V role even though it is not installed in its own partition of a physical disk.

At this point, you could migrate your Windows 7 installation to a VHD so that both of your operating systems are booting from VHDs. If you choose this route, the Disk2vhd tool might prove useful.

You could also use the VHD that you just installed Windows Server into as a Hyper-V (or ESX) virtual machine (you will need to recreate or modify the BCD store first).
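On the subject of the BCD store, the boot menu entry that the Windows Server setup creates for the VHD can be inspected, and rebuilt if necessary, with the BCDEDIT tool. A hedged sketch of what that looks like; {guid} below is a placeholder for the identifier that the /copy command prints:

```
rem Clone the current boot entry and point the copy at the VHD.
bcdedit /copy {current} /d "Windows Server 2008 R2 (VHD)"
bcdedit /set {guid} device vhd=[C:]\win2k8r2.vhd
bcdedit /set {guid} osdevice vhd=[C:]\win2k8r2.vhd
rem Let Windows redetect the hardware abstraction layer at boot.
bcdedit /set {guid} detecthal on
```

Running bcdedit with no arguments from an elevated prompt lists the existing entries so you can verify the device and osdevice values.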

By the way, the size of the VHD you created will probably be around 6 GB when viewed from the Windows 7 instance.

Track stability in Windows 7 with the Reliability Monitor

Having the ability to track system stability over time is something that all Microsoft Windows users have wanted at one time or another. Of course, Windows Performance Monitor has been around for a long time, but it requires manual configuration and a deep understanding of all the cryptically named counters. Fortunately, Windows 7's Reliability Monitor is a preconfigured tool that allows you to track hardware and software problems and other changes to your computer.

Windows Vista also has a version of Reliability Monitor that works similarly to the more advanced version in Windows 7.

In this edition of the Windows Desktop Report, I’ll provide you with an overview of the Windows 7 Reliability Monitor and show you how to use it to track the behavior of your system over its lifetime.

Overview

As I mentioned in the introduction, the Reliability Monitor measures hardware and software problems and other changes to your computer. As it does so, Reliability Monitor compiles a stability index that ranges from 1 to 10 (the least stable to the most stable).

More specifically, the stability index identifies when unexpected problems or other changes reduced the reliability of your system. A graph identifies dates when problems began to occur and a report provides details that you can use to troubleshoot the cause of any reduced reliability.

Accessing the Reliability Monitor

The Reliability Monitor is a part of Windows 7’s Action Center, which can be found in the Control Panel’s System and Security category. However, the easiest way to access the Reliability Monitor is to click the Start button and type Reliability in the Start Search box.
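If you prefer typing commands, Reliability Monitor can also be launched directly from the Start Search box or a command prompt by way of Performance Monitor's /rel switch:

```
rem Opens Reliability Monitor directly.
perfmon /rel
```

Either route lands you in the same tool.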

When the Reliability Monitor launches, its graph will show you the most recent activity. To prepare for the next section of this article, click the graph and then press and hold the left arrow key to essentially rewind the Reliability Monitor’s graph all the way back to the day that you installed Windows 7. When you do, your graph will look similar to my example shown in Figure A.


Figure A

By continually clicking the arrow, you will essentially rewind the graph all the way back to the day that you installed Windows 7.

Taking a look around

As you can see in Figure A, the main feature of the Reliability Monitor is a graph called the Stability Index. On the day you installed Windows 7, your system was assigned a reliability rating of 10.00, which is the highest possible score. If you press and hold down the right arrow key, you'll see the day-to-day ebb and flow of the Stability Index over time as various events occur. When you get to the far right, you'll see your current rating.

By default, each column in the graph represents a day, but you can change the view to weeks by selecting a View By option at the top of the graph.

You may also notice dotted and solid lines in the graph. Dotted lines indicate that there was not enough recorded data to calculate a steady Stability Index. This typically results from periods of time when the system is not in full use - either turned off or in a sleep state. Solid lines indicate that there was enough recorded data to calculate a steady System Stability Index.

Now, if you shift your attention to the right side of the graph, you’ll see that each of the five rows indicates Reliability Events in five categories: Application Failures, Windows Failures, Miscellaneous Failures, Warnings, and Information. As you look over these rows, you’ll see icons that represent the type of event that occurred.

As you can see in Figure B, any failure that occurs is marked by a red error icon, and you can see the resulting drop in the graph above the icon. On any day that a problem event occurs, the reliability index drops sharply. If there are no problems the next day, the reliability index goes up slightly. If several days pass without any problems, the reliability index continues its upward climb, albeit very slowly.


Figure B

Any type of failure that occurs is marked by a red error icon and you can see the resulting drop in the graph up above the icon.

Now, if you select any column that contains icons, you'll see the report section of the System Stability Index, where you can see exactly what the problems were. For example, clicking the column for 8/31/2010 on my example system, as shown in Figure C, reveals that there was a series of events and warnings related to the installation of an old HP scanner driver. As you can see, that problem brought the reliability index down sharply from the previous day's high. The reliability index then very slowly climbed back up and didn't reach the previous high until two weeks later.

Figure C

In the report section of the System Stability Index you can see the exact problem that caused a drop in the reliability index.

Getting more information and solutions

Because the Reliability Monitor is part of the Action Center, it provides links to the Problem Reports tool as well as the Check for Solutions tool at the bottom of the window. You’ll also notice that the Action column of the report can provide more detailed information. For example, clicking the View technical details link in the Informational Events section of the report brings up the Problem Details window, shown in Figure D.


Figure D

Because the Reliability Monitor is part of the Action Center, it provides links to the Problem Reports tool.

What’s your take?

As you can see, Windows 7’s Reliability Monitor makes it easy to track your system’s stability over time, and as you can imagine, it can be a big help in troubleshooting because it allows you to determine what a problem was and when it occurred. Have you used the Reliability Monitor to track stability or troubleshoot a problem? If so, what has been your experience?

How do I make my Windows 7 desktop look and feel like a Linux desktop?

Those of you who have worked on a Linux desktop know how much more efficient you can be. You also know that the possibility of having your Windows desktop look and feel more like a Linux desktop would be a boost to productivity, not only in efficiency, but in ease of use as well. From virtual desktops, to multiple panels, to focus switching and window shading, there are plenty of tricks to use (thanks to third-party applications) that can help you get a far more efficient Microsoft Windows 7 desktop than the one that exists by default.

But how is a Linux desktop any more efficient than the standard Windows desktop?

When you use the standard Windows desktop, you get used to minimizing windows on a single desktop. If you have multiple windows open, you click another window to give it focus, and to get a window out of the way you minimize it. If you have a lot of windows open, you then have to search all those minimized icons for the window you want to work on (or you cycle through all of your open windows with Alt-Tab).

The GNOME developers have done an incredible job of melding the Windows and the Mac OS X desktop together to make a very efficient desktop. But we can take that one step further by using features from all of them. The resulting desktop will have very quick access to applications, multiple workspaces, and ways to keep your desktop clutter-free that the standard desktop can’t touch.



Figure A shows the standard desktop with a number of windows open. Figure B shows that same desktop with all of the windows shaded and out of the way. A quick right-click of a title bar and you have that window back.

Figure A

A typical cluttered Windows desktop

Figure B

A much neater, easier-to-manage desktop, thanks to WinRoll.

I am going to show you how to mimic a very usable, efficient desktop on your Windows 7 machine. This desktop will have the simplicity of Windows, the cool-factor of OS X, and the efficiency of Linux. This may not be to the liking of everyone, but for those of you who prefer a more flexible environment, you will appreciate what these little additions do for the standard Windows work environment.

So, hold on to your hats, we’re going to take that tired, old desktop of yours and make it fresh, and Linux-like.

Step 1: The panels (aka Taskbar)

One of the things I like about GNOME is that the desktop is divided between two panels. The top panel is the primary panel and contains the menus, shortcuts, and notification area. The bottom panel is home to the Window List, Trash, and Show Desktop. To be perfectly honest, I always get rid of the lower panel in favor of a dock (I’ll address this in a moment). But for the time being, let’s work with the main panel.

The first thing you need to do is move the Taskbar to the top of your screen. Why? To make room for the dock you will add later. To do this, right-click the Taskbar, select Properties, and change the position from the bottom to the top (Figure C).

Figure C

You can either just drag the taskbar to the top, or use this method. I prefer this method as you are less likely to bring Explorer to a screeching halt.

Once you have done that, you will want to clean it up. I prefer to keep my launchers pinned to the Start Menu rather than the Taskbar. To pin a launcher to the Start Menu, locate the application in the Start Menu, right-click its icon, and select Pin to Start Menu. After you have all of your applications pinned to the Start Menu, you can unpin them from the Taskbar.

You will also want to add a folder shortcut to the Taskbar, like the Places menu in the GNOME main Panel. To do this, follow these steps:

  • Right-click the Taskbar.
  • Select Toolbars | New Toolbar.
  • When the Explorer window opens, navigate to the folder you want to add to this toolbar (I like to use the Documents folder in the Library).
  • Click Select Folder to add the new toolbar.

Once the new toolbar is added, you can change it to show only its text, or its text and title.

Step 2: Add a Dock

The next step is to add a dock to the bottom of your screen. Windows 7 will not allow a second Taskbar, so you have to use third-party software to add a dock. The one I like is Stardock’s ObjectDock. This application is simple to install and run.

Step 3: Add a desktop Pager

One of the most efficient tools for desktop space is the Linux pager. With this tool you can effectively have more than one workspace on your computer. It’s like having dual (or tri or quad) monitors without the extra hardware.

Since Windows does not have this feature built in, you will have to add a third-party solution. One of the better solutions for this is WindowsPager. This is a fairly good copy of the Linux pager and will give you similar features and functionality. You do not really install WindowsPager; you just fire up the executable. To have the WindowsPager tool run at startup, copy the .exe file into the Startup folder: type shell:startup in the Run dialog and then copy the file into the folder that opens (Figure D).
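The shell:startup alias resolves to a per-user folder under %APPDATA%. As a rough illustration, here is a small Python sketch that builds that path and copies an executable into it (the helper names are my own, not part of any Windows API):

```python
import os
import shutil

def startup_folder(appdata=None):
    """Resolve the per-user Startup folder that shell:startup points to."""
    appdata = appdata or os.environ.get("APPDATA", "")
    return os.path.join(appdata, "Microsoft", "Windows",
                        "Start Menu", "Programs", "Startup")

def run_at_login(exe_path, appdata=None):
    """Copy a standalone .exe (like WindowsPager) into the Startup folder."""
    return shutil.copy(exe_path, startup_folder(appdata))
```

Dragging the file there in Explorer does exactly the same thing; the script just makes the destination explicit.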


Figure D

Copy the .exe files to this directory to ensure the applications start upon login.

Step 4: Window shading

One of the features I have used since the early ’90s is Window Shading. What this does is roll your window up (like a window blind) so that the entire window rolls up into the title bar. This allows you quick access to your windows as well as the ability to arrange your windows in such a way that you always know what window is what - even if the application is “out of the way”.

The best tool I have found for this is WinRoll. This is another tool that does not actually install but runs via an .exe file. Do the same with WinRoll that you did with WindowsPager: copy the .exe into the Startup folder.

Step 5: Autoraise

I don’t know about you, but I hate having to click on a window to raise it. Since my early days of Linux, I have enjoyed the focus-follows-mouse and auto-raise behavior. Fortunately, you do not have to install third-party software for this feature. Instead, do the following:


1. In the Start Menu search box, enter “change how” (no quotes).

2. From the results, select Change How Your Mouse Works.

3. In the new window, select “Activate a window by hovering over it with the mouse”.

Now when you hover your mouse over a window, it will automatically raise to the front and gain focus.

Figure E shows you all the visible elements of the transformation. The only aspect you cannot see is the autoraise feature.


Figure E

Visible difference

The final look

For the curious, Figure E shows a sample of what the Windows-to-Linux desktop can look like. Although you do not get to see it in action, it is much more like a Linux desktop now in both look and feel.

Friday, September 17, 2010

Navy cyber leader expects proactive capabilities this year

 

Navy cyber leader expects proactive capabilities this year

As much as we at HNN balk at all the cyber warfare talk out there, here’s some solid thinking that applies to just about everyone. When the Lower Colorado River Authority finally notices people are trying to log in to its site - but only because one of the IPs resolved to China - that just shows you how pathetic the average “defense stance” really is.

The military is “traditionally reactive and static, but we need to be proactive, dynamic and predictive,” he said. He estimated his command will achieve what he terms a proactive defense stance by October this year, and have predictive capabilities by fiscal 2012. He did not elaborate on technological details of the new capabilities.


In order to fully integrate cyber security and military operations in cyberspace, service members working in that domain must define a baseline, or “normal,” landscape that accurately reflects when something is amiss – and when defense is needed.

“We have no idea what normal is,” McCullough said…

via Navy cyber leader expects proactive capabilities this year — Federal Computer Week.

Google Account hacking-Hack Gmail Easily And Free

Google Password Decryptor is a password recovery tool that can be used to recover your lost Google account passwords from the Google software installed on your computer. But, as I’ll show, it can also be used as a hacking tool.

Google Account hacking

This software exploits a simple property of Google accounts: all the Google web products, such as Gmail, Blogger, and Google Docs, use the same username and password. Once you get the username and password for one Google web product, such as Gmail, you can use all the other Google products, such as AdWords and AdSense, and thereby compromise the victim.



Google Password Decryptor decrypts, in seconds, all the usernames and passwords stored on the victim’s computer. For example, if the victim uses Gtalk, it will decode the locally stored password and reveal the username and password…

It supports many Google applications, such as:


Google Talk
Google Picasa
Google Desktop Search
Gmail Notifier
Google Chrome


Steps to use Google Password Decryptor:



1. After downloading, extract the software into a folder.

2. Run “GooglePasswordDecryptor.exe” from the extracted folder.

3. When the software loads, click the “Start Recovery” button.

4. Done!



Note: You can save this list in HTML or text format by clicking the ‘Export to HTML’ or ‘Export to TEXT’ button.

How To Create Your Own Phisher

How To Create Your Own Phisher

A phisher page is a login page identical to that of the service your victim is using - for example, Gmail, Orkut, Yahoo Mail, PayPal, Facebook, or Twitter.

It will look just as though you are being asked to log in to your email account; that’s where the victim gets tricked, aka HACKED.



So, let’s start.

To create your own phisher, follow these simple steps:




1. Go to the website you want to make your phisher for - for example, Gmail, Yahoo Mail, Orkut, or PayPal.





2. When you are at the login page, click File > Save As.

[Remember to rename it index.html while saving the web page.]





3. When you have saved the web page, open index.html in Notepad.




4. Search for .gif and replace the text written before each image name with





You have to do that for all the images named there, or you can use the Replace All option.



5. You will also need another file, named login.php, which contains the code that saves the username and password typed by the user.



[NOTE: I will not be providing the login.php; you have to get login.php by yourself.

If you have some knowledge of the PHP language, you can write your own login.php.

For those who don’t know PHP, I recommend searching for login.php on Google; you will surely find the file.]




6. After you have done this, click Edit > Find, type action in the search box, and click Find Next.



7. It will take you to the first action string. After the equals sign, type login.php in place of the text written after it.



8. Click Find Next again; this time it will take you to another action string. After the equals sign, type

in place of the text written after it.

NOTE: Type your site’s name in place of your-site, and your free web hosting service in place of yourservice, in

http://www.your-site.yourservice.com/login.php.



9. Now you are all done.




NOTE: You have to upload all the files, including the index_files folder, to your free web hosting service directory, or it won’t work.

The directory will contain:

i. index.html

ii. index_files [Folder which you saved]

iii. login.php

iv. login.txt




10. You can make a phisher for any website using these steps - for example, Gmail, Orkut, Yahoo Mail, PayPal, Facebook, or Twitter.




Happy Hacking



Greetz to all...

Comments are always welcome.

Yahoo! News hacked

Hacker tinkers with news articles undetected.


“ It's more difficult to get into their advertising reporting statistics than their news production tools. ”

Adrian Lamo


In a development that exposes grave risks of news manipulation in a time of crisis, a hacker demonstrated Tuesday that he could rewrite the text of Yahoo! News articles at will, apparently using nothing more than a web browser and an easily-obtained Internet address.

Yahoo! News, which learned of the hack from SecurityFocus, says it has closed the security hole that allowed 20-year-old hacker Adrian Lamo to access the portal's web-based production tools Tuesday morning, and modify an August 23rd news story about Dmitry Sklyarov, a Russian computer programmer facing federal criminal charges under the controversial Digital Millennium Copyright Act (DMCA).

Sklyarov created a computer program that cracks the copy protection scheme used by Adobe Systems' eBook software. His prosecution has come under fire by computer programmers and electronic civil libertarians who argue that the DMCA is an unconstitutional impingement on speech, and interferes with consumers' traditional right to make personal copies of books, movies and music that they've purchased.

Lamo tampered with Yahoo!'s copy of a Reuters story that described a delay in Sklyarov's court proceedings, so that the text reported, incorrectly, that the Russian was facing the death penalty.

The modified story warned sardonically that Sklyarov's work raised "the haunting specter of inner-city minorities with unrestricted access to literature, and through literature, hope."

The text went on to report that Attorney General John Ashcroft held a press conference about the case before "cheering hordes", and incorrectly quoted Ashcroft as saying, "They shall not overcome. Whoever told them that the truth shall set them free was obviously and grossly unfamiliar with federal law."

Lamo says he's had the ability to change Yahoo! News stories for three weeks, and made minor experimental changes to other stories that have since cycled off the site.

The hacker provided SecurityFocus with a screen shot showing an August 10th Reuters story about a Senate committee’s report on the National Security Agency. The screen shot shows the story on Yahoo! News with a false quote attributed to the report: “Rebuilding the NSA is the committee’s top priority. In partnership with AOL Time Warner, we fully expect to bring you a service you can’t refuse.”

According to Lamo, the NSA story remained on the portal for three days, before being cycled off.

He says he deliberately chose an old story Tuesday so it would be seen by few readers, while still demonstrating the vulnerability.

"Yahoo! takes security across its network very seriously, and we have taken appropriate steps to restrict unauthorized access to help ensure that we maintain a secure environment," said Kourosh Karimkhany, senior producer at Yahoo! News, in a statement. The company declined further comment.

'Subversion of Information Attack'
The hack highlights a risk that's troubled security experts since 1998, when a group called "Hacking for Girlies" defaced the web site of the New York Times, replacing the front page with a ramshackle tirade that criticized a Times reporter, and defended then-imprisoned hacker Kevin Mitnick.

"There's always been a concern that somebody would gain access to a news site and make more subtle changes," says Dorothy Denning, professor of Computer Science and director of the Georgetown Institute for Information Assurance at Georgetown University.

One year ago hackers modified a news story on the California Orange County Register web site to report that Microsoft founder Bill Gates had been arrested for hacking into NASA computers.

Experts warn that malicious corruption of content at a respected news source -- sometimes called a 'subversion of information attack' -- could have serious consequences during a crisis.

In the hours following the September 11th terrorist attacks on New York and Washington, millions turned to the Internet for information. Top news sites reported as many as 15 million unique users. Yahoo! reportedly had double the traffic that it received for the entire month of August.

"You can imagine someone changing lists of people who were on the planes, or reported missing, or all kinds of things that could cause a lot of grief," says Denning. "Or posting stories attributing attacks to certain people."

Lamo agrees, and says he's troubled that he had the power to modify news stories that day.

"At that point I had more potential readership than the Washington Post," says Lamo. "It could have caused a lot of people who were interested in the days events a lot of unwarranted grief if false and misleading information had been put up."

Proxy problems
Yahoo! declined to comment on the specifics of the hack, but as described by Lamo, modifying the portal's news stories didn't require much hacking. He made the changes using an ordinary web browser, and didn't need to do so much as enter a password.

The culprit in this case was a trio of proxy web servers that bridged Yahoo!'s internal corporate network to the public Internet. By configuring a web browser to go through one of the proxies, anyone on the Internet could masquerade as a Yahoo! insider, says Lamo, winning instant trust from the company's web-based content management system.

The hacker criticized the web giant for not prioritizing security on the systems that allow editing and creation of news stories.

"There are more secure parts of their network," says Lamo. "It's more difficult to get into their advertising reporting statistics than their news production tools."

The hacker has a history of exposing the security foibles of corporate behemoths. Last year he helped expose a bug that was allowing hackers to take over AOL Instant Messenger (AIM) accounts. And in May, he warned troubled broadband provider Excite@Home that its customer list of 2.95 million cable modem subscribers was accessible to hackers.

Lamo's hobby is a risky one. Unlike the software vulnerabilities routinely exposed by 'white hat' hackers, the holes Lamo goes after are specific to particular networks, and generally cannot be discovered without violating U.S. computer crime law. With every hack, Lamo is betting that the target company will be grateful for the warning, rather than angry over the intrusion.

"I can't give you an exact answer why he does that," says Matthew Griffiths, a computer security worker and a long-time friend of Lamo. "He's kind of a superhero of the Internet."

"I agree that it's not the safest thing I could be doing with my time," says Lamo. "If they prosecute me, they prosecute me."

Maintaining Privacy in a Cloud: More Secure Environment for Cloud Computing

Scientists at the University of Texas in Dallas, with funding from AFOSR's Multidisciplinary University Research Initiative, are seeking solutions for maintaining privacy in a cloud, or an Internet-based computing environment where all resources are offered on demand.

Dr. Bhavani Thuraisingham has put together a team of researchers from the UTD School of Management and its School of Economics, Policy and Political Sciences to investigate information sharing with consideration to confidentiality and privacy in cloud computing.

"We truly need an interdisciplinary approach for this," she said. "For example, proper economic incentives need to be combined with secure tools to enable assured information sharing."

Thuraisingham noted that cloud computing is increasingly being used to process large amounts of information. Because of this increase, some of the current technologies are being modified to be useful for that environment as well as to ensure security of a system.

To achieve their goals, the researchers are inserting new security programming directly into software programs to monitor and prevent intrusions. They have provided additional security by encrypting sensitive data so that it cannot be retrieved in its original form without access to the encryption keys. They are also using the Chinese Wall model, a set of policies that grants access to information based on previously viewed data.
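The Chinese Wall model mentioned here groups companies into conflict-of-interest classes: once a subject has read one company’s data, it is walled off from that company’s competitors. A minimal sketch of the textbook model (the class and company names are made up; this is not the UTD team’s actual implementation):

```python
# Minimal Chinese Wall access check: a subject may read an object
# unless it has already read a *different* company's data in the
# same conflict-of-interest (COI) class.

COI = {  # competing companies whose data must not be mixed (hypothetical)
    "BankA": "banks", "BankB": "banks",
    "OilX": "oil",    "OilY": "oil",
}

history = {}  # subject -> set of companies already read

def may_read(subject, company):
    """Allow the read unless a same-class competitor was already read."""
    coi = COI[company]
    for seen in history.get(subject, set()):
        if COI[seen] == coi and seen != company:
            return False  # the wall: a competitor in the same class
    return True

def read(subject, company):
    """Record a read, enforcing the wall first."""
    if not may_read(subject, company):
        raise PermissionError(f"{subject} is walled off from {company}")
    history.setdefault(subject, set()).add(company)
```

The key property is that access depends on a subject’s own reading history, not on a fixed clearance level.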

The scientists are using prototype systems that can store semantic web data in an encrypted form and query it securely using a web service that provides reliable capacity in the cloud. They have also introduced secure software and hardware attached to a database system that performs security functions.

Assured information sharing in cloud computing is daunting, but Thuraisingham and her team are creating both a framework and incentives that will be beneficial to the Air Force, other branches of the military and the private sector.

The next step for Thuraisingham and her fellow researchers is examining how their framework operates in practice.

"We plan to run some experiments using online social network applications to see how various security and incentive measures affect information sharing," she said.

Thuraisingham is especially glad that AFOSR had the vision to fund such an initiative that is now becoming international in its scope.

"We are now organizing a collaborative, international dimension to this project by involving researchers from Kings College, University of London, University of Insubria in Italy and UTD related to secure query processing strategies," said AFOSR program manager, Dr. Robert Herklotz.

Is your flashy school website safe?


Hackers for Charity – Help us get video content!


Ready Generation X hacking tools wait .....................


Ultimate Hacker Sticker Collection on eBay

Anyone that supports 501(c)(3) not-for-profit organizations that promote security. “Huh?!” you say? All proceeds of this auction are being donated to the Open Security Foundation (OSF), maintainers of the Open Source Vulnerability Database and the DatalossDB project.

Anyone who likes stickers should bid. Bosses, get them for your employees. Security types, get them for your laptops, hacker spaces, or your local neighborhood cars that need spicing up. Collectors, get them to satisfy that irrational need to collect odd things.

Get the full details here at Attrition.org: http://attrition.org/news/content/stickers/. There are 250 stickers including an HNN lot and lots of extras as the bids get higher and higher.

How to change Welcome/Logon screen background without the use of any other app/software?

You can now change the Welcome/logon screen background in Windows 7, and it’s officially supported. Microsoft has added the ability to load a JPG image as the welcome/logon screen background without using any third-party software.

Although the functionality was designed to let OEMs set their own welcome/logon screen backgrounds, you can set your own by changing a few registry settings.

How to-

1. Run Registry Editor and navigate to: HKLM\Software\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background
2. Create a DWORD value called OEMBackground and set it to 1.
3. Copy your image file (a JPG) into %windir%\system32\oobe\info\backgrounds and rename it backgroundDefault.jpg. (By default, the info and backgrounds folders don’t exist, so create them.)

NOTE: The image must be less than 256KB in size.
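Both the size limit and the fixed destination path can be checked programmatically before you copy the file. A small Python sketch (the helper names are my own, for illustration):

```python
import os

MAX_BYTES = 256 * 1024  # logon backgrounds over 256KB are ignored

def valid_logon_background(path):
    """Check a candidate backgroundDefault.jpg against the known rules."""
    if not path.lower().endswith(".jpg"):
        return False                      # must be a JPG
    return os.path.getsize(path) <= MAX_BYTES

def target_path(windir=r"C:\Windows"):
    """Where Windows 7 looks for the custom logon background."""
    return os.path.join(windir, "system32", "oobe", "info",
                        "backgrounds", "backgroundDefault.jpg")
```

If your photo is too large, re-save it at a higher JPG compression level until it fits under the limit.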

Tuesday, September 14, 2010

Leaderboard: The 10 best Android smartphones

1. HTC EVO 4G

Pound-for-pound and feature-by-feature, there’s still nothing out there in Android land that can hang with the HTC EVO 4G. With its 4.3-inch WVGA screen, 8 megapixel camera, 1 GHz Snapdragon CPU, front-facing VGA camera, Micro HDMI port, 3G Wi-Fi hotspot, and 4G WiMAX capability, the EVO has it all. And, with its large on-screen keyboard and handy kickstand for watching video, it’s a device that’s easy and pleasant to use. When I reviewed it, I called the EVO “The Hummer of smartphones” because it’s so huge and it’s such a power hog, but there’s no denying that it is the elite device of the Android fleet.
2. Google Nexus One

This was the first Android device that really knocked my socks off, and I still use it as the gold standard to measure every other Android smarty. Sure, it doesn’t have the best battery life and its screen isn’t as big and bold as the HTC EVO or the Droid X, but it is remarkably elegant and usable and it remains the one Android phone untarnished by the mobile manufacturers and telecom carriers. Google no longer sells it on the mass market but offers the N1 as a testing phone for Android developers. Still, as I said, it remains the gold standard and as long as Google keeps selling it in one form or another, it will likely remain on this list.
3. Samsung Vibrant

The Samsung Vibrant snuck up on a lot of people. Samsung hadn’t produced many good smartphones in recent years. In fact, the Samsung Omnia was so bad that I rated it as one of the worst tech products of 2009. So when Samsung announced the Galaxy S, its first line of Android devices, expectations were fairly low. But, despite the marketing confusion of naming the Galaxy S something different (and giving it a slightly different configuration) on every carrier, the product has been a big hit, selling over a million units in its first 45 days on the market. The best of the Galaxy S models is T-Mobile’s Samsung Vibrant, which is thin, powerful, has a great screen, and does the least amount of fiddling with the stock Android OS.
4. HTC Incredible

One of the most anticipated Android devices of 2010 was the Google Nexus One on Verizon. Unfortunately, it never happened — partly because Verizon dragged its feet to allow the unlocked Nexus One on its network and partly because Google was unprepared to handle the customer service responsibilities for the Nexus One. As a result, the maker of the Nexus One, HTC, released a very similar device called the HTC Incredible (sometimes referred to as the “Droid Incredible”). It’s not quite as elegant or high-end as the Nexus One, but the Incredible is the next best thing.
5. Motorola Droid X

With Sprint’s HTC EVO 4G drawing much of the attention of the Android world since its unveiling at CTIA 2010 in March, the response from Motorola and Verizon (the previous darlings of the Android world) was the Droid X. It matched the HTC EVO with a 4.3-inch screen, an 8 megapixel camera, a Micro HDMI port, and mobile hotspot functionality, but it lacked a front-facing camera, 4G connectivity, and the extra polish that HTC puts on Android with its Sense UI.
6. Samsung Epic 4G

This version of the Samsung Galaxy S is the one that departs most significantly from the standard form factor. That’s mostly because it integrates a full 53-key slide-down hardware keyboard. But it’s not just any keyboard. With its large keys and dedicated row for number keys, it is arguably the best hardware QWERTY on any Android device. It also features a 4-inch Super AMOLED screen, a zippy 1 GHz Samsung processor, and Sprint’s 4G WiMAX service. I could certainly make a case for ranking this phone as high as number three on this list.
7. Motorola Droid 2

The fact that this phone is all the way down at number seven on this list is an indication of just how competitive the Android market has become, because this is an excellent smartphone. The original Droid really kick-started the Android revolution and remained one of the best-selling Android devices on the market throughout the first half of 2010. The Droid 2 simply updates the design slightly, improves the keyboard, and replaces the internals with more powerful hardware. For those who prefer a physical keyboard and Verizon’s top-notch coverage, the Droid 2 remains a great choice.
8. Samsung Captivate

The other Samsung Galaxy S to make this list is AT&T’s Samsung Captivate, which has virtually all of the same internals and specs as the Samsung Vibrant but has a flatter, boxier form factor. The thinness of the Captivate combined with lots of punch and high-end features make this a very attractive phone. I actually prefer the design of the Captivate over its cousin the Vibrant (No. 3 on this list). However, AT&T has loaded it up with a ton of AT&T crapware that users cannot uninstall, and even worse, has restricted the device so that users can’t “side-load” apps that are not in the Android Market. T-Mobile doesn’t commit either of those two sins with the Vibrant, and that’s what makes it a better choice.
9. HTC Aria

The HTC Aria might be one of the best kept secrets of the Android world. HTC could have honestly named this phone the EVO Mini. It looks a lot like the EVO, but in a far smaller package. In fact, while the EVO is the biggest Android phone, the Aria is the most compact, with its 3.2-inch screen. That’s its primary appeal — along with a low price tag (it retails for $129 but you can usually find it for much less than that, even free, based on promotions). The biggest problems with the Aria are the underpowered 600 MHz CPU and the fact that, like the Galaxy S, AT&T has loaded it up with lots of crapware and limited it to only the applications in the Android Market.
10. LG Ally

The LG Ally is not very pretty - and it’s also pretty underpowered - but it does have a few redeeming qualities that can make it attractive. It has a great little hardware keyboard - the best hardware keyboard on an Android device next to the Epic 4G. It’s also very compact, though not as compact as the HTC Aria, since the Ally has the slider keyboard that makes it a little more bulky. But, the best feature is the price: $49. And, like the Aria, many customers will get it for free with the right promotion. For 50 dollars or less, this phone is a nice value.

Answers to some common questions about symbolic links

What makes symbolic links different from standard shortcuts?

The one question that a lot of folks asked is “What’s the difference between a symbolic link and a standard shortcut? They both seem to do the same thing.”

Well, standard shortcuts and symbolic links do, in fact, perform a similar function, but there are several differences. To begin with, a symbolic link is a pointer that works at the file-system level as opposed to a shortcut, which is a pointer designed to work within explorer.exe. Since a symbolic link is essentially grafted to the file system, it doesn’t have a footprint, so to speak, whereas a shortcut is an actual file on the hard disk.

Take a look at the Properties dialog boxes shown in Figure A. As you can see, the shortcut is an actual file that takes up 4KB of disk space. The symbolic link uses 0 bytes.

Figure A


Shortcuts are files, and symbolic links are part of the file system.

Another difference is that a shortcut is fundamentally a one-shot deal, while a symbolic link has a sustained existence. To see this in action, let’s suppose that you use the MKLink command

Mklink /J C:\CurrentWork "C:\Users\Greg Shultz\My Documents\Articles\TechRepublic\2010\9) September 10\9-3"

to create the C:\CurrentWork symbolic link folder that points to the path:

C:\Users\Greg Shultz\My Documents\Articles\TechRepublic\2010\9) September 10\9-3

Then, you use the Create Shortcut wizard to create a shortcut called CurrentWork Shortcut that points to the same path.

If you double-click the CurrentWork Shortcut, you’ll see that the shortcut will deliver you to the 9-3 folder, but if you double-click the CurrentWork symbolic link, you’ll see the operating system makes it appear that the files actually exist in the CurrentWork folder, as shown in Figure B. The shortcut has done its job and is gone, while the symbolic link continues working.
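The sequence above can be sketched at a Command Prompt. (The paths are just the article’s example; note that, strictly speaking, mklink /J creates a junction — a symbolic-link variant that works only for local folders — while mklink /D creates a true directory symbolic link.)

```cmd
:: Create a junction named C:\CurrentWork pointing at the long project path
mklink /J C:\CurrentWork "C:\Users\Greg Shultz\My Documents\Articles\TechRepublic\2010\9) September 10\9-3"

:: Listing the parent directory shows the link as part of the file system,
:: with its target in brackets and no file size of its own; the output
:: includes a line something like:
dir C:\ | find "CurrentWork"
:: <JUNCTION>     CurrentWork [C:\Users\Greg Shultz\My Documents\Articles\...]

:: You can change into it as if the files lived there
cd C:\CurrentWork
```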


Figure B

Once a shortcut does its job, it’s gone, but a symbolic link continues working.

(This also works from the Save and Open dialog boxes of your applications. The efficiency improvement then comes from the fact that no matter where you are, all you have to remember is the name of the symbolic link.)

If you work from the command prompt, you’ll discover that you can access symbolic link folders on the command line, as shown in Figure C. You can’t really use a shortcut from the command line.

Figure C

You can access symbolic link folders from the command prompt.

Windows Vista and Windows 7 have built-in symbolic links


Many other folks wrote in to ask why they couldn’t use many of Windows Vista’s and Windows 7’s built-in symbolic links. For example, if you try to access the C:\Documents and Settings symbolic link folder, you’ll see an error message like the one shown in Figure D.

Figure D

You’ll encounter an Access is Denied error message from some built-in symbolic links.

To begin with, under normal circumstances, you wouldn’t even see the operating system’s built-in symbolic links, unless you enable the Show Hidden Files, Folders and Drives option on the View tab of the Folder Options dialog box. Unfortunately, many folks do so and thus end up trying to use the built-in symbolic links. However, these symbolic links are not designed for end users; they’re designed to provide backward compatibility for older applications.

Windows Vista and Windows 7 have two types of built-in symbolic links designed for backward compatibility called System Junctions and Per-User Junctions.

An example of a System Junction is C:\Documents and Settings. In Windows XP there is an actual folder called Documents and Settings that contains the user profile folders. In Windows Vista and Windows 7, the user profile folders are stored in a folder called C:\Users, as shown in Figure E.


Figure E

User profiles in Windows Vista and Windows 7 are stored in the C:\Users folder.

However, in order to be backward compatible with older applications that are hard-coded to look for and use the C:\Documents and Settings folder in order to access user profiles, both Windows Vista and Windows 7 create a C:\Documents and Settings symbolic link folder that actually points to C:\Users. This allows the older application to think that it is using the C:\Documents and Settings folder when it is actually using the more streamlined C:\Users folder.

An example of a Per-User Junction is C:\Users\Greg Shultz\My Documents. In Windows XP there is an actual folder called My Documents. In Windows Vista and Windows 7, that folder is now called Documents. In order to be backward compatible with older applications that are hard-coded to look for and use the My Documents folder to open and save files, both Windows Vista and Windows 7 create a My Documents symbolic link.
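You can see these compatibility junctions for yourself at a Command Prompt; dir’s /aL switch lists reparse points (junctions and symbolic links). The user name below is just the article’s example.

```cmd
:: List the junctions at the root of the system drive; the output
:: includes an entry such as:
:: <JUNCTION>     Documents and Settings [C:\Users]
dir /aL C:\

:: And inside a user profile folder, an entry such as:
:: <JUNCTION>     My Documents [C:\Users\Greg Shultz\Documents]
dir /aL "C:\Users\Greg Shultz"
```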

What’s your take?

Do these answers give you a better understanding of how symbolic links work in Windows Vista and Windows 7?

How do I delete all tab stops in a Word document?

Tab stops are a paragraph format. That means you can assign different tab stops for individual paragraphs. Fortunately, that doesn’t mean you have to delete them all individually—or by the paragraph. There’s a quick, easy way to delete all the tab stops in a document.

First, select the entire document. There are a number of ways to do so, but the quickest is to press [Ctrl]+a. With the entire document selected, do the following to delete all tab stops:

1. Choose Paragraph from the Format menu. Or, right-click the selection and choose Paragraph from the resulting context menu. In Word 2007 and 2010, click the dialog launcher in the Paragraph group on the Home tab.
2. Click Tabs (at the bottom-left).
3. In the Tabs dialog box, click the Clear All button at the bottom-right.
4. Click OK.

The truth is, deleting all the tabs is a simple enough task, but the feature’s placement isn’t particularly easy to find, in any version.

Of course, you won’t always want to delete all of the tab stops. You can clear individual tab stops as follows:

1. Position the insertion point in the appropriate paragraph.
2. Choose Paragraph from the Format menu. Or, right-click the selection and choose Paragraph from the resulting context menu. In Word 2007 and 2010, click the dialog launcher in the Paragraph group on the Home tab.
3. Click Tabs.
4. In the Tab Stop Position control, highlight the tab stop you want to delete.
5. Click Clear.

Some might consider using the ruler a bit easier:

1. Position the insertion point in the appropriate paragraph.
2. Find the tab indicator on the ruler and drag it off the ruler — it couldn’t be simpler!

Two quick tips for inserting and formatting tabs

As long as we’re talking about the ruler, there are two quick ruler tips that I really like:

* Double-click an existing tab on the ruler to display the Tabs dialog box for quick formatting.
* Double-click the ruler where you want to insert a tab to both insert a tab and open the Tabs dialog box.

Is engineering now a young man's game?

Recruiters at Silicon Valley companies lament that in the U.S. there is a shortage of qualified engineers. But unemployment figures show a different picture. So what’s the deal? According to a piece by Vivek Wadhwa for TechCrunch, the truth of the situation is that tech companies prefer to hire young, inexperienced engineers rather than shell out the money for a seasoned veteran.

The thinking is that you can get a new programmer for about a third of the salary of an experienced programmer. Even if it takes a few weeks for the new programmer to get trained, the company still saves money. Though they wouldn’t publicly admit it, some companies would rather hire someone eager with a “clean slate” whom they can train as they want than someone with years of acquired knowledge.

Wadhwa’s article talks about a new book called Chips and Change by University of California, Berkeley Professors Clair Brown and Greg Linden. The authors of the book cite Bureau of Labor Statistics and census data for the semiconductor industry and found that:

* Salaries increased dramatically for engineers during their 30s, but the increases slowed after the age of 40.
* Past age 40, salaries started dropping, depending on the level of education.
* After 50, the mean salary of engineers was lower than that of those younger than 50 — by 17% for those with bachelor’s degrees, and by 14% for those with master’s degrees and PhDs.

Wadhwa’s advice for older workers is to move into management and/or keep their skills current.

Save on bandwidth with the Squid proxy server

Squid is a caching proxy for the web that supports HTTP, HTTPS, FTP and more. Its distinct advantages are caching frequently-requested pages to speed up web page load times and also reducing bandwidth by not having to re-request the same page over and over again. It can also be used as a reverse proxy to accelerate web servers by serving up cached content rather than permitting continuous hits to the web server for identical content to multiple clients.

To illustrate how to quickly set up Squid as a caching proxy, I’ll use Fedora 13, which currently provides a very recent Squid 3.1.4 that is easy to install:

# yum install squid

Out of the box, Squid will work as a web client proxy for the local host and local network. What you want to do is edit /etc/squid/squid.conf, look for the “localnet” entries, and comment out those networks that are not your local network. For instance, if you use a 192.168 network at home, comment out the 10.0.0.0 and 172.16.0.0 lines:

#acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

#acl localnet src 172.16.0.0/12 # RFC1918 possible internal network

acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

Next, start the Squid service. If you have a firewall enabled on the system, be sure to allow TCP access to port 3128.
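On Fedora 13 those two steps might look like the following; the iptables rule is just one common approach, so adjust it for your own firewall configuration.

```shell
# Start Squid now and have it start at boot
# (Fedora 13 still uses the SysV-style service tools)
service squid start
chkconfig squid on

# Allow client connections to Squid's default port, 3128,
# and persist the rule across reboots
iptables -I INPUT -p tcp --dport 3128 -j ACCEPT
service iptables save
```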

At this point, you can test by using a command line browser on the local system by doing:

$ http_proxy="http://localhost:3128" elinks http://foo.com/

And then look at the /var/log/squid/access.log file. If the browser did not complain about not being able to connect, and the log files show activity, then you have successfully set up Squid. The logs will look something like this:

1281203766.589 2626 ::1 TCP_MISS/200 18137 GET http://foo.com/ - DIRECT/1.1.1.1 text/html

1281203767.186 595 ::1 TCP_MISS/200 4867 GET http://foo.com/skins/common/commonPrint.css? - DIRECT/1.1.1.1 text/css

If you were to execute the same browser command again, you would see the following:

1281204000.528 313 ::1 TCP_MISS/200 18137 GET http://foo.com/ - DIRECT/1.1.1.1 text/html

1281204000.591 60 ::1 TCP_REFRESH_UNMODIFIED/200 4873 GET http://foo.com/skins/common/commonPrint.css? - DIRECT/1.1.1.1 text/css

This shows you the cache at work. The initial page is loaded again, but the CSS file is sent to the requesting browser using the cached copy. The next step is to try the same from another system that would also be using the cache (you can easily use the same command line browser command if available).
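If a command-line browser isn’t available on the other system, curl can run the same check; this sketch just prints the HTTP status code of a request made through the proxy (foo.com stands in for any reachable site, as in the article’s example, and the host name should be the proxy’s address when testing from another machine).

```shell
# Request a page through the Squid instance and show only the status code;
# a 200 here, plus a new line in access.log, confirms the proxy is answering
curl -s -o /dev/null -w "%{http_code}\n" -x http://localhost:3128 http://foo.com/
```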

If you want a transparent proxy setup, so that no one knows the proxy is in use and no one can circumvent it, you can achieve that by adjusting iptables rules; if your firewall system is running Linux, this is easily accomplished. Note that if you do use a transparent proxy, you cannot use authentication on the proxy. If authentication isn’t important to you, setting up a transparent proxy is a fast and easy way to force everyone on the network to use it.

In /etc/squid/squid.conf you want to uncomment the “cache_dir” directive; its parameters are the storage scheme, the cache directory, the cache size in MB, and the number of first- and second-level subdirectories:

# Uncomment and adjust the following to add a disk cache directory.

cache_dir ufs /var/spool/squid 7000 16 256

and change

http_port 3128

to

http_port 3128 transparent

Once these changes have been made and Squid has been restarted, you also need to change the firewall rules for your network’s firewall or gateway system to redirect all outbound HTTP traffic to the proxy. This can be tricky, depending on whether your Squid install is on the firewall system itself or on a separate system in the local network, and also on your firewall’s software. The Squid wiki has a section on Interception (i.e., transparent proxies) and how to set them up with Cisco devices, Linux, FreeBSD, and OpenBSD.
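For the simplest case — Squid running on the Linux gateway itself — a single iptables NAT rule can silently redirect clients’ outbound HTTP traffic to the proxy. The interface name and the assumption that clients arrive on it are mine; substitute your own.

```shell
# Redirect HTTP traffic arriving from the LAN interface (assumed eth0 here)
# to Squid's listening port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

# No extra rule is needed to exempt Squid's own outbound requests in this
# single-box setup: locally generated traffic traverses the OUTPUT chain,
# not PREROUTING, so it is never matched by the rule above
```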

That same wiki page also has other example configurations. Squid can be used for more than just web page caching; there are examples of using it for instant-message filtering, as a reverse proxy to cache web page requests on behalf of a web server, and with various forms of authentication.

Squid is very versatile and can do quite a lot. For large organizations, Squid offers a surprisingly easy way to save on bandwidth, as well as an easy way to require authentication for outbound web access. For simple web caching, Squid is pretty much ready to run as is, and the wiki offers plenty of examples and help if you need something a little more complex.