Sunday, August 29, 2010

Upgrading an Existing TFS 2008 Project to Support New Features of TFS 2010

Hi. As part of upgrading our Team Foundation Server 2008 to 2010, I was upgrading our team project to support the new team testing features available with the Visual Studio Ultimate and Visual Studio Team Test editions.

The following blog link was helpful.

http://blogs.msdn.com/chrispat/archive/2009/10/19/enabling-test-management-on-upgraded-team-projects-beta-2.aspx

but it covers Beta 2 of TFS 2010. Although most of the information is still applicable, I had to devise workarounds for a few steps.

1. I downloaded the v5 Process Template from our new Team Foundation Server.

2. I imported the new link types as mentioned in the blog.

3. As soon as I started importing the new work item types, I was shown an error like this:

AreaId cannot be renamed to Area Id.

I tried searching on Bing and was taken to a Microsoft Connect link, but there was not much information available there, so I had to work out the reason and the answer myself.

I modified TestCase.xml and SharedStep.xml: I changed Area Id to AreaId in both XML files and Iteration Id to IterationId in SharedStep.xml.
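
For reference, the relevant field definitions in the template XML ended up looking roughly like this (a trimmed-down sketch, not the full field definitions; only the friendly name attribute changes, the refname stays the same):

<FIELD name="AreaId" refname="System.AreaId" type="Integer" />
<FIELD name="IterationId" refname="System.IterationId" type="Integer" />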

And the third step was successful after this change.

4. Then I imported the new categories as mentioned in Chris Patterson's blog post.

5. While extending the existing Bug and Scenario/Requirement types, instead of exporting them and then importing modified versions, I directly imported the files from the process template I downloaded in the first step.
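
If you are following along, the imports in steps 2 through 5 can all be done with the witadmin tool from a Visual Studio 2010 command prompt; importing a work item type definition, for example, looks something like this (the collection URL and project name are placeholders):

witadmin importwitd /collection:http://yourserver:8080/tfs/DefaultCollection /p:YourProject /f:TestCase.xml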

Everything now seems to work perfectly.

Reasons why I am writing this blog

1. The document in that blog covers upgrading projects based on the Agile process template only.
2. I faced a small issue, so others might face the same issue, and now they will have a solution.
3. I have updated the Microsoft Connect team and asked them to check this blog.

10 things you should know about NoSQL databases

The relational database model has prevailed for decades, but a new type of database — known as NoSQL — is gaining attention in the enterprise. Here’s an overview of its pros and cons.

For a quarter of a century, the relational database (RDBMS) has been the dominant model for database management. But, today, non-relational, “cloud,” or “NoSQL” databases are gaining mindshare as an alternative model for database management. In this article, we’ll look at the 10 key aspects of these non-relational NoSQL databases: the top five advantages and the top five challenges.

Five advantages of NoSQL

1: Elastic scaling

For years, database administrators have relied on scale up — buying bigger servers as database load increases — rather than scale out — distributing the database across multiple hosts as load increases. However, as transaction rates and availability requirements increase, and as databases move into the cloud or onto virtualized environments, the economic advantages of scaling out on commodity hardware become irresistible.

RDBMSs might not scale out easily on commodity clusters, but the new breed of NoSQL databases is designed to expand transparently to take advantage of new nodes, and these systems are usually built with low-cost commodity hardware in mind.

2: Big data

Just as transaction rates have grown out of recognition over the last decade, the volumes of data that are being stored also have increased massively. O’Reilly has cleverly called this the “industrial revolution of data.” RDBMS capacity has been growing to match these increases, but as with transaction rates, the constraints of data volumes that can be practically managed by a single RDBMS are becoming intolerable for some enterprises. Today, the volumes of “big data” that can be handled by NoSQL systems, such as Hadoop, outstrip what can be handled by the biggest RDBMS.

3: Goodbye DBAs (see you later?)

Despite the many manageability improvements claimed by RDBMS vendors over the years, high-end RDBMS systems can be maintained only with the assistance of expensive, highly trained DBAs. DBAs are intimately involved in the design, installation, and ongoing tuning of high-end RDBMS systems.

NoSQL databases are generally designed from the ground up to require less management: automatic repair, data distribution, and simpler data models lead to lower administration and tuning requirements — in theory. In practice, it’s likely that rumors of the DBA’s death have been slightly exaggerated. Someone will always be accountable for the performance and availability of any mission-critical data store.

4: Economics

NoSQL databases typically use clusters of cheap commodity servers to manage the exploding data and transaction volumes, while RDBMS tends to rely on expensive proprietary servers and storage systems. The result is that the cost per gigabyte or transaction/second for NoSQL can be many times less than the cost for RDBMS, allowing you to store and process more data at a much lower price point.

5: Flexible data models

Change management is a big headache for large production RDBMS. Even minor changes to the data model of an RDBMS have to be carefully managed and may necessitate downtime or reduced service levels.

NoSQL databases have far more relaxed — or even nonexistent — data model restrictions. NoSQL Key Value stores and document databases allow the application to store virtually any structure it wants in a data element. Even the more rigidly defined BigTable-based NoSQL databases (Cassandra, HBase) typically allow new columns to be created without too much fuss.

The result is that application changes and database schema changes do not have to be managed as one complicated change unit. In theory, this will allow applications to iterate faster, though, clearly, there can be undesirable side effects if the application fails to manage data integrity.
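
As a quick illustration, a document store will happily hold records with different shapes in the same collection, with no schema change required before the second record is saved (the field names here are purely hypothetical):

{ "id": 101, "name": "Alice", "email": "alice@example.com" }
{ "id": 102, "name": "Bob", "twitter": "@bob", "preferences": { "theme": "dark" } }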

Five challenges of NoSQL

The promise of the NoSQL database has generated a lot of enthusiasm, but there are many obstacles to overcome before NoSQL databases can appeal to mainstream enterprises. Here are a few of the top challenges.

1: Maturity

RDBMS systems have been around for a long time. NoSQL advocates will argue that their advancing age is a sign of their obsolescence, but for most CIOs, the maturity of the RDBMS is reassuring. For the most part, RDBMS systems are stable and richly functional. In comparison, most NoSQL alternatives are in pre-production versions with many key features yet to be implemented.

Living on the technological leading edge is an exciting prospect for many developers, but enterprises should approach it with extreme caution.

2: Support

Enterprises want the reassurance that if a key system fails, they will be able to get timely and competent support. All RDBMS vendors go to great lengths to provide a high level of enterprise support.

In contrast, most NoSQL systems are open source projects, and although there are usually one or more firms offering support for each NoSQL database, these companies often are small start-ups without the global reach, support resources, or credibility of an Oracle, Microsoft, or IBM.

3: Analytics and business intelligence

NoSQL databases have evolved to meet the scaling demands of modern Web 2.0 applications. Consequently, most of their feature set is oriented toward the demands of these applications. However, data in an application has value to the business that goes beyond the insert-read-update-delete cycle of a typical Web application. Businesses mine information in corporate databases to improve their efficiency and competitiveness, and business intelligence (BI) is a key IT issue for all medium to large companies.

NoSQL databases offer few facilities for ad-hoc query and analysis. Even a simple query requires significant programming expertise, and commonly used BI tools do not provide connectivity to NoSQL.

Some relief is provided by the emergence of solutions such as HIVE or PIG, which can provide easier access to data held in Hadoop clusters and perhaps eventually, other NoSQL databases. Quest Software has developed a product — Toad for Cloud Databases — that can provide ad-hoc query capabilities to a variety of NoSQL databases.
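
For example, a summary that would otherwise require a hand-written MapReduce job can be expressed in Hive's SQL-like query language in a single statement (assuming a hypothetical page_views table already defined over files in the Hadoop cluster):

SELECT page, COUNT(*) AS hits
FROM page_views
GROUP BY page;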

4: Administration

The design goals for NoSQL may be to provide a zero-admin solution, but the current reality falls well short of that goal. NoSQL today requires a lot of skill to install and a lot of effort to maintain.

5: Expertise

There are literally millions of developers throughout the world, and in every business segment, who are familiar with RDBMS concepts and programming. In contrast, almost every NoSQL developer is in a learning mode. This situation will resolve itself naturally over time, but for now, it's far easier to find an experienced RDBMS programmer or administrator than a NoSQL expert.
Conclusion

NoSQL databases are becoming an increasingly important part of the database landscape, and when used appropriately, can offer real benefits. However, enterprises should proceed with caution with full awareness of the legitimate limitations and issues that are associated with these databases.

Direct Computing Experience Platform With Windows 8

That big Windows 8 leak and some patent applications show what Microsoft planned for Windows 8; the recent find is titled ‘Direct Computing Experience’. The patent describes how a laptop is sometimes used just for watching movies on DVD, but because the laptop does far more than the DVD players available in the market, the experience is different.
Microsoft plans to make the computer go into a reduced-functionality or sandboxed mode when you want to use it for a specific consumer electronics application, such as a DVD player. Quoting the patent application summary:
Briefly, various aspects of the subject matter described herein are directed towards launching a computing device into a special computing experience (referred to as a direct experience) upon detection of a special actuation mechanism coupled to the computing device. For example, a dedicated button, a remote control device, and so forth may trigger a different operating mode, such as by launching a particular application program. The special actuation mechanism may instead (or additionally) cause the device to be operated in a constrained, or sandbox mode, in which only limited actions may be taken, e.g., as defined by a manufacturer or end user.
The idea is that you might not need to log in, wait for your entire desktop to load, start the media player, and then start the movie. This process can be automated and initiated at the click of a button.
The patent images show that you can push a button, the system will look for media to play (a likely scenario: a movie DVD in the drive) and then start playing without you having to log in to the system. Flowcharts in the patent application explain this in more detail.

This sounds like a solution to the battery problems in laptops; a reduced-functionality mode will definitely improve battery life. It is quite an interesting progression: Device Stage, SideShow, and now Direct Experience.

Invoking C# Compiler from your Code

Imagine you have just generated some code using a code generation technique, wouldn’t it be really cool to then programmatically call the C# language compiler and generate assemblies from the generated code?
Let’s imagine we used an Xslt transformation to generate some code:

XslCompiledTransform transform = new XslCompiledTransform();
transform.Load("Program.xslt");
transform.Transform("Program.xml", "Program.cs");

Using the CSharpCodeProvider class we can now programmatically call the C# compiler to generate the executable assembly. We’ll begin by advising the compiler of some options we’d like it to use when compiling our code using the CompilerParameters class.

CompilerParameters parameters = new CompilerParameters();

We’ll specify that the assembly to be created should be an executable and we’ll also specify such things as whether debug information should be included, the warning level, whether warnings should be treated as errors etc.

parameters.GenerateExecutable = true;
parameters.IncludeDebugInformation = true;
parameters.GenerateInMemory = false;
parameters.TreatWarningsAsErrors = true;
parameters.WarningLevel = 3;
parameters.CompilerOptions = "/optimize";
parameters.OutputAssembly = "Program.exe";

Using the compiler parameters we can now compile the C# code using the CSharpCodeProvider class and get the results of the compilation as an instance of the CompilerResults class.

CSharpCodeProvider codeProvider = new CSharpCodeProvider();
CompilerResults results = codeProvider.CompileAssemblyFromFile(parameters, new string[] { "Program.cs" });

While the code you generated is unlikely to have any compiler warnings or errors, other developers may be less fortunate, and they can access the Errors property of the CompilerResults class to determine what went wrong. The Errors property actually contains both compiler errors and warnings, although the following simple LINQ queries let you examine the warnings and errors in isolation.


var warnings = from e in results.Errors.Cast<CompilerError>()
where e.IsWarning
select e;
var errors = from e in results.Errors.Cast<CompilerError>()
where !e.IsWarning
select e;
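
To report those diagnostics, a minimal sketch might look like this (it assumes using directives for System, System.Linq, System.CodeDom.Compiler, Microsoft.CSharp, and System.Xml.Xsl at the top of the file, which the snippets above also need):

// Print each warning and error with its location in the generated source file.
foreach (CompilerError error in results.Errors)
{
    Console.WriteLine("{0} {1} at line {2}: {3}",
        error.IsWarning ? "Warning" : "Error",
        error.ErrorNumber,
        error.Line,
        error.ErrorText);
}

// PathToAssembly points at Program.exe when the compilation succeeded.
if (!results.Errors.HasErrors)
{
    Console.WriteLine("Compiled to " + results.PathToAssembly);
}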

While code generation is very cool, we generate code to ultimately create assemblies that we can either execute or reference from other code. If you're programmatically generating code, why not programmatically generate the assemblies next time too?

Thursday, August 26, 2010

Get website metrics with AWStats

Log file analysis is interesting. Seriously, who doesn't want to know how many hits they get on their website, which page or file is most popular, which operating systems or browsers are visiting, or what country the majority of visitors are coming from? I think we would all agree that anyone who runs a website likes to know this kind of information.

AWStats is a best-of-breed log file analysis program. Primarily, it analyzes log files for web servers: Apache, IIS, and others, including proxy servers such as Squid. Interestingly enough, it can also be used to analyze log files for FTP servers and mail servers.

Interested yet? Free information! I'm an information nut and love looking at or making up statistics, so AWStats suits me quite well. Written in Perl, AWStats is probably the most widely used log analysis program. It can be run in real time as a CGI script, or it can be run periodically from cron to provide static pages. Running AWStats every few hours is generally enough to keep the overhead down, but if you look at it rarely, running it as a CGI might be a better fit. Whichever works best for you, AWStats can accommodate.

The current version of AWStats is 6.95, and it can be downloaded from the home page as a tar file, zip file, or noarch RPM file. If you run CentOS or Red Hat Enterprise Linux and have the RPMForge third-party repository set up, you can use yum to install the latest version of AWStats; likewise with Fedora. For Debian and Ubuntu users, AWStats is available via apt-get or Aptitude.

Once AWStats is installed, you should be able to immediately get the CGI to load. There might not be a lot there as of yet, but it should load. If AWStats is located in /var/www/awstats/, set the following in the <VirtualHost> directive for the domain you wish to view:

Alias /awstats/icon/ /var/www/awstats/icon/
ScriptAlias /awstats/ /var/www/awstats/

<Directory /var/www/awstats/>
    DirectoryIndex awstats.pl
    Options ExecCGI
    order deny,allow
    deny from all
    allow from 192.168
</Directory>

Then you should be able to visit http://www.yourdomain.com/awstats/awstats.pl and be given a good healthy error. This is due to the fact that no configuration has been done as of yet, but we can circumvent this and use the default “localhost.localdomain” configuration file that is present by visiting http://www.yourdomain.com/awstats/awstats.pl?config=localhost.localdomain instead. (This is assuming that you use the RPMForge package; if you grab the tar or zip file, you need to create this file and move the files into place first — you can do this by creating /etc/awstats/ and copying awstats-6.95/wwwroot/cgi-bin/awstats.model.conf from the distribution archive to /etc/awstats/awstats.localhost.localdomain.conf.)
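
If you went the tar or zip route, that step might look something like this (paths assumed from the 6.95 archive layout mentioned above):

mkdir -p /etc/awstats
cp awstats-6.95/wwwroot/cgi-bin/awstats.model.conf /etc/awstats/awstats.localhost.localdomain.conf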

From there, you can copy that file to /etc/awstats/awstats.www.domain.com.conf as well, to view statistics for the chosen domain name. You will want to edit the file and, at a bare minimum, set the log file to examine:

LogFile="/var/log/httpd/intranet-access_log"

SiteDomain="domain.com"

HostAliases="www.domain.com"

This would tell AWStats that for this configuration file, /var/log/httpd/intranet-access_log is the log file to parse. Before firing up the new URL, however, you need to update the database, which can be done by creating /etc/cron.hourly/awstats with the following contents:

#!/bin/bash

if [ -f /var/log/httpd/access_log ] ; then

exec /usr/bin/awstats_updateall.pl now -confdir="/etc" -awstatsprog="/var/www/awstats/awstats.pl" >/dev/null 2>&1

fi

exit 0

The above assumes certain path locations for where scripts have been installed. The awstats_updateall.pl script is in the awstats-6.95/tools/ directory if you downloaded the tar or zip files. This script will also run hourly due to its placement in /etc/cron.hourly/, to keep the database updated.

Now you can visit http://www.domain.com/awstats/awstats.pl?config=www.domain.com and view the AWStats statistics page.

Getting AWStats to parse mail and FTP logs is just as easy, and the online documentation is quite helpful (and the configuration files are very heavily commented).

AWStats provides a lot of statistics in its pages. The information it reports can offer real insight into how other people view your site, and who they are. For those looking to improve or tune their site according to viewer demographics, AWStats can prove invaluable.

10 tips for effective Active Directory design

Active Directory design is a science, and it’s far too complex to cover all the nuances within the confines of one article. But I wanted to share with you 10 quick tips that will help make your AD design more efficient and easier to troubleshoot and manage.

1: Keep it simple

The first bit of advice is to keep things as simple as you can. Active Directory is designed to be flexible, and it offers numerous types of objects and components. But just because you can use something doesn't mean you should. Keeping your Active Directory as simple as possible will help improve overall efficiency, and it will make the troubleshooting process easier whenever problems arise.
2: Use the appropriate site topology

Although there is definitely something to be said for simplicity, you shouldn’t shy away from creating more complex structures when it is appropriate. Larger networks will almost always require multiple Active Directory sites. The site topology should mirror your network topology. Portions of the network that are highly connected should fall within a single site. Site links should mirror WAN connections, with each physical facility that is separated by a WAN link encompassing a separate Active Directory site.
3: Use dedicated domain controllers

I have seen a lot of smaller organizations try to save a few bucks by configuring their domain controllers to pull double duty. For example, an organization might have a domain controller that also acts as a file server or as a mail server. Whenever possible, your domain controllers should run on dedicated servers (physical or virtual). Adding additional roles to a domain controller can affect the server’s performance, reduce security, and complicate the process of backing up or restoring the server.
4: Have at least two DNS servers

Another way that smaller organizations sometimes try to economize is by having only a single DNS server. The problem with this is that Active Directory is totally dependent upon the DNS services. If you have a single DNS server, and that DNS server fails, Active Directory will cease to function.
5: Avoid putting all your eggs in one basket (virtualization)

One of the main reasons organizations use multiple domain controllers is to provide a degree of fault tolerance in case one of the domain controllers fails. However, this redundancy is often circumvented by server virtualization. I often see organizations place all their virtualized domain controllers onto a single virtualization host server. So if that host server fails, all the domain controllers will go down with it. There is nothing wrong with virtualizing your domain controllers, but you should scatter the domain controllers across multiple host servers.
6: Don’t neglect the FSMO roles (backups)

Although Windows 2000 and every subsequent version of Windows Server have supported the multimaster domain controller model, some domain controllers are more important than others. Domain controllers that are hosting Flexible Single Master Operations (FSMO) roles are critical to Active Directory health. Active Directory is designed so that if a domain controller that is hosting FSMO roles fails, AD can continue to function — for a while. Eventually though, a FSMO domain controller failure can be very disruptive.

I have heard some IT pros say that you don’t have to back up every domain controller on the network because of the way Active Directory information is replicated between domain controllers. While there is some degree of truth in that statement, backing up FSMO role holders is critical.
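
If you are not sure which domain controllers those are, netdom can list the current role holders from the command line (netdom is available on Windows Server 2008 domain controllers and with the remote administration tools on older versions):

netdom query fsmo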

I once had to assist with the recovery effort for an organization in which a domain controller had failed. Unfortunately, this domain controller held all of the FSMO roles and acted as the organization’s only global catalog server and as the only DNS server. To make matters worse, there was no backup of the domain controller. We ended up having to rebuild Active Directory from scratch. This is an extreme example, but it shows how important domain controller backups can be.
7: Plan your domain structure and stick to it

Most organizations start out with a carefully orchestrated Active Directory architecture. As time goes on, however, Active Directory can evolve in a rather haphazard manner. To avoid this, I recommend planning in advance for eventual Active Directory growth. You may not be able to predict exactly how Active Directory will grow, but you can at least put some governance in place to dictate the structure that will be used when it does.
8: Have a management plan in place before you start setting up servers

Just as you need to plan your Active Directory structure up front, you also need to have a good management plan in place. Who will administer Active Directory? Will one person or team take care of the entire thing, or will management responsibilities be divided according to domain or organizational unit? These types of management decisions must be made before you actually begin setting up domain controllers.
9: Try to avoid making major logistical changes

Active Directory is designed to be extremely flexible, and it is possible to perform a major restructuring of it without downtime or data loss. Even so, I would recommend that you avoid restructuring your Active Directory if possible. I have seen more than one situation in which the restructuring process resulted in some Active Directory objects being corrupted, especially when moving objects between domain controllers running differing versions of Windows Server.
10: Place at least one global catalog server in each site

Finally, if you are operating an Active Directory consisting of multiple sites, make sure that each one has its own global catalog server. Otherwise, Active Directory clients will have to traverse WAN links to look up information from a global catalog.

Windows security groups: To nest or not?

There are few things more frustrating than troubleshooting a permissions issue, only to find that a nested global security group is the culprit. The nesting of global security groups can cause so many issues, especially when deny permissions come into play. Take into account any Group Policy-based deny permissions, and the tracing effort can become quite cumbersome.

For Active Directory domains, do you allow nested global security groups? Troubleshooting group membership is complicated at first glance in most tools. Many tools will report effective rights, but not necessarily that those rights come from a nested group, or even from a group membership at all.

I would love to say that nesting group membership is prohibited, but there are occasional situations where it makes sense. My professional administration practice has limited nested group membership with a few guiding rules:

1. Allow no more than one level of nested group membership.
2. A security group can have no more than one “member of” value.
3. The nested security group must not contain groups designated for deny permissions.
4. The nested global security group must not be a high-level privilege group.

These are basic rules and may not address every valid use case for nesting a global security group. The guiding principle behind these parameters is that nesting is kept to a minimum, does not increase the troubleshooting burden, and reduces the risk of accidental over-permissioning. Limiting the use of nested groups will also help prevent issues related to token size.
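
When you do need to untangle a membership chain, the directory service command-line tools can expand the nesting for you; for example, the following lists the recursively expanded members of a group (the distinguished name is hypothetical):

dsget group "CN=File Share Admins,OU=Groups,DC=example,DC=com" -members -expand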

One of the best use cases is the occasional situation where you need to add a computer account to a global security group; it becomes awkward if user accounts and computer accounts are intermixed in the same security group. Another use case is when the Built-In groups (from local computer systems) are being combined with domain user accounts as a way to separate them. Nesting can make sense in those situations as well as others that may arise in your specific configuration.

Do you use any level of nested security groups? If so, share how and when you use them below.

Create a shortcut to modify a Group Policy Object

If you have ever had to go through a number of configuration iterations to get a new set of Group Policy Objects (GPOs) working correctly, one of the biggest inconveniences is frequently going in and out of the Group Policy Editor to manipulate the policies in question. There's an easy way to turn this process into a shortcut.

The first thing to understand is the globally unique identifier (GUID) that is associated with every Active Directory object. A GUID is assigned to every GPO, and determining this string is the first step. The GUID is visible in the Details tab of the GPO in question (Figure A).

Figure A


You can copy this field by right-clicking it; you can then put the value into a command string to edit the GPO. The command string launches the Group Policy Editor directly against that object in the directory. The text below is an example command string to edit the GPO shown in Figure A in my private lab:

gpedit.msc /gpobject:"LDAP://CN={0523F1BD-B9F1-469A-87B8-D28E2345BADD},CN=Policies,CN=System,DC=rwvdev,DC=intra"


It's a little tricky to determine where to put the GUID, so I've taken the text example above and marked where the GUID and the domain configuration go. Figure B shows where these two pieces of information sit in the command string:

Figure B


The GUID is inserted in the CN={...} portion immediately after LDAP://, and the domain information goes in the DC= components at the end (in my case, the RWVDEV.INTRA domain is enumerated).

Then you can save the string as a shortcut or run it interactively to go directly into the editor for the GPO in question. The changes are made live and saved when closed as long as the permissions are in place. This can make frequent changes much easier for testing new configurations.

KineticSecure online continuous backup

KineticD takes online backup to a new level of ease of use, convenience, and protection. Their KineticSecure offering provides continuous backup and versioning to ensure that your backups are up-to-the-minute.

Note: This review was performed using a publicly available 14-day, full-featured trial of the software.

Specifications

  • Software Requirements: All versions of Windows and Windows Server from Windows 95b up, or Mac OS X Leopard 10.5.7 or better
  • Windows Hardware Requirements: 5GB disk space, 512MB RAM, Pentium 4 1.7GHz CPU or better
  • Mac Hardware Requirements: 5GB disk space, 512MB RAM, PowerPC G4 1.5 GHz or any Intel CPU
  • Pricing: $2/GB per month (no user or device limit)
  • Additional Information: Product Web site

Who’s it for?

Due to the nature of continuous online backups, KineticSecure is best suited for computers which can afford extended downtime in case a full system restore is needed. It is well suited for mobile users, home usage, typical desktop users, and lightly used servers.


What problems does it solve?

Traditional backup utilities operate on a schedule, which is often cancelled by users when it makes their computer hard to use or does not happen at all because the computer is off. In addition, online backups are often interrupted by network connectivity issues. KineticSecure is constantly backing up files, so there is never a long, scheduled backup operation to slow the computer down or to be missed.

Standout features

  • Continuous Backup: The key feature here is the continuous backup, which backs up a file the moment it changes. The backup also handles open files.
  • Versioning: By default, three versions of a file are always kept available, although you may increase that up to 28 if you need to.
  • Ease of Use: KineticSecure is very easy to set up and use; by not worrying about types of backups or schedules, much of the usual backup complexity is eliminated.
  • Pricing Plan: KineticSecure offers a simple pricing model ($2 per GB per month, with as many users and devices as you want), which ensures that you only pay for service that you need.

What’s wrong?

  • Relies on the Network: As good as KineticSecure is, it cannot avoid problems inherent in the online backup model, namely that file backup and restoration is only as fast as your Internet connection.
  • File-Level Backup: KineticSecure is not able to perform any kind of backup suitable for a bare metal restore.
  • UI Always Prompting for Password: The UI seems to be spread across numerous small utility applications and Web sites, so it feels like every few screens you need to re-enter your username and password.

Competitive products


Bottom line for business

KineticSecure (formerly Data Deposit Box; the name has changed recently, but the application itself still carries the original name) is a very compelling entry in the online backup space. The out-of-the-box configuration is just fine for most users, but for advanced (or choosy) users, there is a very rich set of configuration options. These additional choices include file locations other than the “well known” ones (documents, browser bookmarks, etc.), exclusions, time periods during which backups should not run, bandwidth usage, and more. The depth of configuration should satisfy even the most discriminating user or administrator.

File restoration is easy enough, with your choice of using the Web site or the local client to perform restorations. Unfortunately, this is where KineticSecure’s biggest weakness shows. KineticSecure backs up files, but there simply is no tool or capability to perform a bare metal restore. This means that to restore a system, you need to get it up and running to the point where doing a file restoration is possible.

You will probably not want to use this process to restore an application server, Active Directory server, etc. and it will certainly not be suitable for a mission-critical server (not to mention the time needed to download the files).

Likewise, a network file server in any sizable company probably has enough data changing daily to make continuous protection consume an awful lot of bandwidth. For those situations, traditional on-site backup continues to rule the roost. You can also use the Web-based administration to access your files remotely, which is handy, and you can provide others with a link to the files to share them.

That being said, KineticSecure is as good as it gets for online backup. The simple, functional experience from the get-go is a winner. The pricing model is outstanding and is as customer-friendly as can be imagined. If your backup needs are a good candidate for online backups, KineticSecure should be at the top of your list of applications to evaluate.

Microsoft Exchange 2010 Service Pack 1 ships

Microsoft announced general availability of the final version of Exchange 2010 Service Pack (SP) 1 on August 25.

Included in SP1 are the usual fixes and updates made to Exchange 2010 since Microsoft shipped that product in October 2009. But SP1 also includes new features and functionality, including archiving and discovery updates, Outlook Web App improvements, mobile user and management improvements, and “some highly sought after additional UI for management tasks,” officials said earlier this year.

On the Outlook Web App front, SP1 delivers a faster reading experience as a result of enhancements for pre-fetching message content, according to Microsoft execs, as well as other UI-related updates that will make OWA work better on smaller netbook screens. Additionally, users will be able to share calendars with anonymous viewers via the Web (if admins enable this functionality), the Softies said.

Operating systems supporting Exchange 2010 SP1 include Windows 7 Professional 64-bit; Windows Server 2008; Windows Server 2008 Enterprise; Windows Server 2008 R2 Enterprise; Windows Server 2008 R2 Standard; and Windows Vista 64-bit Editions with Service Pack 1, according to the company.

Microsoft officials said more than 500,000 Technology Adoption Program partners had downloaded the beta, released in June.

Users can download Exchange 2010 SP1 from Microsoft’s Download Center.

Can Red Hat beat Microsoft in the cloud?

Red Hat announced a strategy for its cloud stack, now called Cloud Foundations Edition One.

It’s about portability and interoperability. In other words it’s about standards. In line with that, Red Hat has submitted its cloud platform as a potential standard for interoperability.

At the heart of the cloud movement was always the idea that you would abstract away the complexity of operating systems through virtualization, so it wouldn't matter on what specific piece of hardware your data and programs actually lived.

Of course that’s not how computer rivalries work. There are multiple hypervisors, multiple routes to virtualization, multiple ways to manage clouds, and multiple cloud stacks.

When seen in comparison to the ideal of a fully interoperable environment, open source has a distinct advantage. When you can see the code, you can link to it more easily than when you can't. (Try it at home. Wire up your computer with your eyes open, then do it with your eyes shut.)

The cloud strategy puts Red Hat on a collision course with Microsoft, whose Azure cloud asks you to trust its portability and its interoperability. Just to turn things up another notch, Red Hat said it would support its business software for a full 10 years, as opposed to Microsoft's five.

Logically, Red Hat's cloud strategy should work. Red Hat is seeking to be the center of the cloud world while larger vendors swirl around it, and when all the rushing around is done, the center is where you want to be.

But the real world is not the ideal plane. Red Hat's positioning is indeed Switzerland, if you want to compare the Swiss army to that of, say, Russia. Yes, it's neutral, but if it comes to a fight I'm betting on the bear. Can Red Hat succeed without being, say, bought by IBM?

That’s the risk. It will take more than winning the Dreamworks account to assure a happy ending.

QWERTY comparison: BlackBerry Torch vs. Droid 2 vs. Epic 4G

Although most of the momentum in the smartphone world is happening around touchscreen devices, there are still plenty of people — especially many business professionals — who want a hardware keyboard.

Three new high-end smartphones with hardware QWERTY keyboards have recently hit the market, and I have been doing an old-fashioned showdown with all three of them. I've put together a set of photos comparing the three devices, and I've done a quick evaluation of each of the three keyboards.

Photo gallery

See a photo comparison of the three: Keyboard showdown: Droid 2 vs. Epic 4G vs. BlackBerry Torch.

Samsung Epic 4G


The Epic 4G has the most versatile keyboard of the three. It has a dedicated row for numbers and several special keys (search, back, home, smiley, etc.). The keys themselves are chiclet-style, reminiscent of Apple MacBooks and Sony VAIO laptops.

BlackBerry Torch 9800

The BlackBerry Torch has the traditional BlackBerry QWERTY keyboard that has been around on high-end devices since the BlackBerry 8800 World Edition. It is a top-quality keyboard with a nice weight to it, and it typically has a low error rate. Those who are already familiar with BlackBerry will love the standard feel.

Motorola Droid 2

The Droid 2 keyboard is the worst of the three. The keys are too flat and indistinct, and there are no special keys other than the arrow keys. The Droid 2 keyboard is better than the original Droid keyboard, but that's not saying much. Most of the people I know who have a Droid bought it at least partly because of the physical keyboard, but those same people report that 90% of the time they don't use it, because it's so bad.

Agile drivers for new project management tools

If you believe in the concept of the tipping point, the migration to agile software development has tipped: according to the latest report on Agile Development Tools by Forrester Research, 56% of survey respondents use agile or iterative development methods, in contrast to the 13% who profess to use a waterfall approach (the remainder use no formal methodology). These statistics illustrate that agile has gone from a radical, fringe technique to the dominant methodology since the Agile Manifesto was first published back in 2001.

One of the central tenets of the Agile Manifesto is the statement that “We value processes and tools, but we value individuals and interactions more.” Like many of the bold statements that make up the manifesto, this phrase has been misinterpreted to mean that agile developers must reject tools and automation of the development process, and that the use of any tool to assist in tracking progress or managing tasks, other than a packet of sticky notes affixed to the wall, automatically brands a team as “un-agile.”

Like the extreme interpretation of the comments in the Agile Manifesto regarding documentation, which has led to the myth that agile developers aren’t allowed to use pens or paper, this extreme interpretation misses the point. When agile developers decide how much documentation to write, or what sorts of tools to apply, the guiding principle is the same: What’s the “barely sufficient” solution that fulfills the functional need without adding unnecessary process and without sacrificing the ideals of simplicity and flow that guide agile development?
The tools define the project

It’s also important to remember that, for many development teams, as well as their clients and managers, the tools define the project. When I manage projects as a contract PM, my clients often want to see the work breakdown structure, or project plan, usually constructed in Microsoft Project, to demonstrate to them that I’ve thought through the tasks required to deliver the project results. They look for the Gantt chart to see the effort defined against a timeline, and they expect to see written change control documents when the scope evolves. For many managers, the most difficult element of migrating to agile is acclimating to new techniques for planning work, tracking progress, and integrating changes into the project scope. Often, the client’s attitude is “I don’t care what you developers do in your ‘war room’; be as agile as you want in your development activities, but you still need to show me a project plan and a Gantt chart to reassure me that you’re on track.”
The distinction between operational and experimental projects

In addition, it’s important to refer to the agile concepts of experimentation and uniqueness. As Jim Highsmith repeatedly reminds us, agile development is, at its core, focused on fitting the process to the type of project at hand. Operational projects that incorporate existing project plans and produce repetitive results, like the building of the 20th semiconductor fabrication plant, probably don’t require or benefit from agile methods; the development of a new silicon chip design does.

This distinction between operational and experimental projects is key. When we build software or products in an experimental mode, the actual coding or development can't be automated or “tooled”; only human beings can go through the iterative, speculative process of trying out ideas, accepting or rejecting them, and following their thread to the next unique idea. That's not to say, however, that every element of the agile development process is unique. Every project still requires design, testing, integration, and monitoring, and these common elements can be automated, and in fact are being automated by vendors eager to produce the counterpart of Microsoft Project for the agile world.
Automation in agile development

In fact, contrary to the myth that agile teams reject tools, agile development calls out for automation because of some of its inherent characteristics. Because stories or features are often decomposed into tasks, and frequently parceled out to different team members for development, the ability to keep the entire team informed on progress against task and feature development is critical. In distributed teams, this obviously becomes even more important. Frequent testing and integration also drive the need for current information that is easily accessed by team members, so they can understand the status of all elements of the product at any moment. As we discussed earlier, during the migration to agile, managers are often disconcerted by the change in status reporting tools; new tools that show them they can still keep their finger on the pulse of development go a long way toward easing the transition.
Speed and agility are factors

Both speed and agility also cry out for new tools. In an agile project, in which features, sequences, and tasks can change rapidly, and items can be added or subtracted from the scope daily (or even hourly), the old Gantt-chart-on-the-wall technique won’t work, unless you want to assign someone to put it up and tear it down hourly. Tools that allow for frequent transitions in the plan, and that can instantly indicate what the team has learned about what will or won’t work, are required.
Agile tools

As I noted in a previous TechRepublic column, Scrum and other agile practitioners use different techniques for tracking and reporting than traditional developers. The Product and Sprint Backlog, the Burndown Chart, the Change Report or Delta Table, and other tracking and monitoring tools are obvious candidates for automation; many vendors, from IBM and HP to newer entrants such as CollabNet, Rally Software, and VersionOne, are producing products that offer these capabilities, and more. CollabNet, for example, has a suite of tools that spans the gamut, from its free ScrumWorks Basic package (which offers the foundational capabilities to track and manage a product backlog) to its TeamForge product (which presents a complete team development environment designed for large-scale, distributed development efforts).
Conclusion

This article is not intended as a review of these tools, but instead to discuss the agile drivers and concepts that create the need for new project management tools in the first place. Forrester, in the report noted above, does a great job of evaluating the many vendor offerings in this space and guiding readers through the selection process. Many of these vendors offer free trial versions, or limited-time trials of their complete packages.

Agile teams need to discard the myths that fool us into believing that we must either adapt the old tools or use only whiteboards and Post-it notes to control our project efforts. The agile tool space is populated with mature, capable tools that can make the work of agile development more efficient and visible, without violating the fundamental tenets of agility.

Resume pet peeves you may not know about

We all know about the more common pet peeves recruiters have with resumes, such as poor grammar and misspellings, but here are a few more that you may not have thought of. These came from a survey of technical recruiters and hiring managers on About.com.

* Writing the resume or cover letter in the third person. I have actually never even thought anyone would do this, but apparently it’s common enough to become a pet peeve. And it’s also kind of creepy.
* Using tiny fonts. A lot of people just can’t stand the thought of a one- or two-page resume, which is the recommended length, so they employ a microscopic font so they can still mention every technology they’ve ever laid a hand on. If a recruiter has to employ a magnifying glass to read your resume, you’re already losing points.
* Listing references but not professional ones. We know your brother-in-law thinks the world of you, but unless he’s Bill Gates, it really doesn’t carry a lot of weight for a recruiter.
* Attaching a resume with an obscure, significant-only-to-you name. Naming your resume with the current date is not smart. Give it your name.
* Writing the resume using table formats (columns). Think in terms of what will be most accessible to the recruiter.
* Making the resume too long. Okay, this one isn't new to readers of this blog, but I thought I'd mention that it came up in the survey just to reinforce my advice. I can't say it any more clearly: a recruiter only needs to see the skills you have that fit the job. He or she is not interested in the evolution of your technical development. You can mention that in the interview.

Create a defragmentation scheduled task in Windows Server 2008

Windows Server 2008 introduces a new extension of Group Policy that allows scheduled tasks to be deployed via a Group Policy Object (GPO). In the case of disk defragmentation, you can configure a GPO to run a defragmentation task. We have to borrow the scripting options from the built-in task, specifically the %windir%\system32\defrag.exe command coupled with the -c parameter.
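
In the task itself, the action works out to something like the following; the -c switch tells defrag.exe to defragment all of the volumes on the server:

%windir%\system32\defrag.exe -c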

You configure scheduled tasks through GPOs in the Computer Configuration | Preferences | Control Panel Settings | Scheduled Tasks section of the Group Policy editor (Figure A).


Figure A


Once the scheduled task is made via the GPO, it is located in a different area of the Task Scheduler (taskschd.msc) snap-in compared to the built-in task from the default installation. The GPO tasks will appear above the Microsoft folder in the root of the Task Scheduler Library folder (Figure B).

Figure B


Pushing a defrag task out with a GPO is, on the one hand, good because it ensures a consistent configuration deployed quickly; on the other hand, it may be risky to launch a series of tasks that send a burst of I/O to the storage system at the same time. If this mechanism is selected, here are guidelines for deploying GPOs for defrag:


  • Go very granular in the organizational unit structure with different schedules to distribute the workload.
  • Consider running this on client PCs.
  • Be aware of multiple iterations of defragmentation where SANs are in use.
  • Consider the consolidated effects of virtual machines.

Do you feel that defragmentation tasks should be centrally managed via Group Policy or local on each server? Share your comments in the discussion.

Thursday, August 19, 2010

Is primary storage deduplication going entirely mainstream?

Storage vendors are very passionate about the topic of storage deduplication. Vendors who offer storage deduplication tout it, while vendors who don't offer it discredit it.

Before I go too far, let's agree on what we are talking about so we can disagree later: When I say primary storage, I'm referring to non-backup storage, so these are the logical unit numbers (LUNs) that provide storage resources for file servers, databases, and virtual machines. When I say deduplication, I'm referring to a technology (operating on blocks, pages, or files) that removes the storage consumption of duplicate areas on disk.

Many solutions offer deduplication on storage other than primary storage; this is frequently on backup tiers or with solutions such as a virtual tape library (VTL). Software solutions can also add deduplication within a file, but are not represented on the storage area network (SAN) as a feature of the storage system.

NetApp is the one mainstream storage vendor that has been providing primary storage deduplication for some time. I am convinced that the company will soon have some competition in the space on the heels of Dell's acquisition of Ocarina Networks. Ocarina is an interesting offering that provides a combination of compression and deduplication. In fact, I had a chance to visit with Ocarina last year, and I was impressed by what I saw.

The question becomes: Is primary deduplication that big of a deal? My answer is yes. NetApp even has a virtualization guarantee program that assures 50% less storage usage for virtualization implementations. Where does Dell’s acquisition of Ocarina come into play? I believe it fits quite nicely when we consider Dell’s acquisition of EqualLogic in 2007.

What does this mean for the consumer? Simply speaking, if more primary storage deduplication solutions become available from storage vendors, we'll see the competition adjust. Does this ensure that other storage vendors will rush into a primary storage deduplication offering? Not necessarily, but among mainstream vendors there will be competition for the feature.

Do you feel that primary storage deduplication is a necessity today to get the most out of your storage dollar? Share your comments below.

Six things laptops can learn from the iPad

Apple sold 3.3 million iPads in Q2, the product’s first quarter on the market. That was more than the number of MacBook laptops (2.5 million) that the company sold in Q2. Plus, the two products combined catapulted Apple from No. 7 in the global notebook market to No. 3.

Meanwhile, all of the other top five notebook vendors saw their growth slow during the same period, suggesting that the iPad cut into their sales. Will these iPad numbers be a short-term bump based on the unparalleled hype and anticipation for the product, or will they be amplified even further during the back-to-school and holiday seasons? That will be one of the most interesting trends to watch during the second half of 2010.

Nevertheless, the iPad has already sold enough units to alarm laptop makers and make them contemplate how to react. Nearly all of them are already working on competing tablets, powered by Google Android in most cases.

But, laptop makers should also look at the factors that are triggering the iPad’s popularity and consider how some of those factors could be co-opted into notebooks. Here are the top six:

1. Battery life is a killer feature

When Apple first shared the technical specs of the iPad and claimed 10 hours of battery life, I rolled my eyes. Published battery life numbers rarely hold up in the real world. However, the iPad actually exceeded expectations. I’ve easily milked 11-12 hours of battery life out of the iPad, and others such as Walt Mossberg of The Wall Street Journal have reported the same thing.


This kind of battery performance is huge for business professionals because it untethers them from a charger for an entire business day. Whether it's for a full day of meetings or a cross-country flight, they can focus on their work without having to worry about finding a place to plug in at some point. I've seen several business users state that this was their primary incentive for using the iPad.

2. Instant On changes the equation

The fact that you can simply click the iPad’s power button and have it instantly awake from its sleep state and be ready to pull up a Web page, glance at a calendar, or access an email is another major plus. Compare that to dragging your laptop into a conference room, for example. Even the best laptops with Windows, Mac, or Linux take about 30 seconds to boot and then you have to log in and wait some more until the OS is ready.

You don’t always want to fire up your laptop at the beginning of a meeting and leave it on because then you could get distracted or appear as if you’re not paying attention to the other people in the room. But, if something comes up and you want to quickly access your information, then you want it instantaneously so that you don’t have to tell the other people in the room, “Hang on for a second while I pull up that data,” which can break the flow of the conversation and even make you look unprepared.

While some laptops can accomplish something similar by quickly going in and out of a sleep state when you flip the lid open or closed, this can regularly cause problems with wireless networking and other basic functionality, and tends not to be as quick as the iPad.

3. Centralize the software

The feature that made the iPad infinitely more useful for lots of different tasks is its massive platform of third-party applications, which are all available in a central repository (that's the key feature): the Apple App Store. The App Store also serves another valuable function: All updates for iPad apps are handled there as well.

Contrast that with laptops, where you can get software preloaded on your computer, buy software shrink-wrapped, or download it from the Internet, and then nearly all of the different programs have their own software updaters. It's a much more complicated and confusing process for the average user. There's no reason why a desktop/laptop OS platform can't have an app store. I recently noted that Ubuntu Linux 10.04 offers a nice step in that direction.


4. Simple interfaces are best

There’s a classic children’s book called Simple Pictures Are Best where a photographer is trying to do a family portrait and the family keeps wanting to try crazy things and add more stuff to the portrait and the photographer keeps repeating time and time again, “Simple pictures are best.”

It's the same with a user interface. There's a natural tendency to keep tossing in more things to satisfy lots of different use cases. But the more discipline you can maintain, the better the UI will be. Since the iPad runs on Apple's iOS (smartphone) operating system, it is extremely limited in many ways. However, those limitations also make it largely self-explanatory to most users, because it requires little to no training. People can just point and tap their way through the apps and menus.

Software makers have been attempting simplified versions of the traditional OS interface for years, from Microsoft Bob to Windows Media Center to Apple Front Row. None of them has worked very well. The question may be one of OS rather than UI. Could a thin, basic laptop run a smartphone OS? I expect that we'll see several vendors try it in the year ahead.

5. Most users consume, not create

One of the biggest complaints about the iPad is that it offers a subpar experience for creating content. There’s no denying it, and frankly it’s one of the reasons that I personally don’t use the iPad very much. It’s mostly a reader of books, documents, and files for me, because when I go online I typically do a lot of content creation, from writing articles on TechRepublic to posting photos on Flickr to posting tech news updates on Twitter.


However, I'm not the average user. Even with the spread of social networking, which is much more interactive, the 90-9-1 principle still applies across most of the Web. That means only 1% of users are actual content creators, while 9% are commenters and modifiers, and the remaining 90% are simply readers or consumers. The iPad is a great device for content consumers. But, it's not very good for the creators and modifiers, who are both strong candidates to stick with today's laptop form factors, which are perfect for people who type a lot and manipulate content.

That leaves a huge market that could be easy pickings for the iPad. As a result, vendors need to think about ways to make laptops better content consumption devices.

6. Size matters

Being able to carry the iPad without a laptop bag is another huge plus. The power adapter is even small enough to roll up and put in a pocket, a jacket, or a purse. The diminutive size of the iPad can make business professionals feel as if they are traveling very light, especially if they’re used to lugging a laptop bag stuffed with the laptop and a bunch of accessories to support it. On a plane, working with the iPad on a tray table is a much roomier experience than trying to use most laptops.

The lightweight nature of the iPad can also make it more likely that professionals will carry it into a conference room or into someone else’s office to show a document or a Web page, for example.

There are plenty of ultraportable laptops on the market from virtually every vendor, but these tend to be specialty machines and are often higher priced. In light of the iPad’s success, vendors might want to rethink their ultraportable strategy by making these devices smaller, less expensive, and longer-lasting on battery. They may also consider experimenting with a mobile OS such as Android on some of these devices.

HR problem waiting to happen: The perpetual volunteer

Managers love the ever-giving over-achiever — if that person can get the job done. The problem lies with the person who volunteers for everything but then never makes things actually happen. I’ll talk about both types here.

Some employees volunteer for new duties and are great at making things happen and producing results. And too many managers will lean on this person excessively. (If my email and the discussion posts are any indication, most managers lean heavily on these people.)

Some people just get charged up by being extremely busy and challenged, but there are those who only do the extra stuff out of fear of repercussions. I know this is asking the impossible, but managers need to wise up and figure out which is which. If you have a person on your team who consistently takes on any new duty, you need to make sure that there isn’t some underlying issue that drives that. Otherwise, that person could one day wind up in the nearest clock tower with a sniper rifle and you in the scope. (I just had a mental image of that scene, but with a manager on the ground yelling into a bull-horn, “Do you have your laptop with you?”)
The perpetual volunteer

Lazy managers love a bottomless well of productivity. That is, until they see that things aren’t actually getting done. Often the person who is the first to raise his hand to volunteer has no idea how to do the assigned task. He is a little delusional about his own capabilities or time availability. So everything that depends on the tasks he volunteered for gets pushed back, and the manager has a real and ongoing mess to clean up.

So there is an in-between. You want to be seen as dependable and flexible but you don’t want to take on duties that you can’t possibly fulfill. Also, you want to avoid being pigeon-holed by your manager as the place where all little responsibilities go, because, whether consciously or unconsciously, that manager will use you up until you’re as tired and overworked as Lindsay Lohan’s probation officer.
How to get out of the hole

So what do you do if you’ve gradually and almost imperceptibly become the receptacle for all extra duties? I’m being optimistic here, but maybe your boss really doesn’t realize what a burden the extra projects are for you. In that case, you should have a chat. If not, I would try gradually weaning the boss away from depending solely on you. The next time a project comes up, say, “I’m covered up right now. I don’t think I could get to it in the time you need it.” That might be a wake-up call for your boss.

But then, it might not be. If your boss is a real jerk, he or she may turn on the Mafia death stare or threaten to fire you. If the person is really that unreasonable, then it’s time to look for another job.

Throwing down the gauntlet: Prove that Linux is not user-friendly

I’ve been covering Linux and open source since 1999 and using Linux exclusively since around 1996. I’d say that earns me some credit - at least in certain circles. Through those years I’ve pretty much seen every trend, every success, and every failure. I’ve also evolved through every stage of Linux user: from blind fanboy, to staunch advocate, to mentor, to (some would say) guru, and everything in between. During that time I have tried very hard to remain PC and let the criticism just roll off my back. I have said some things only to retract them and have held back certain opinions out of fear I might offend.

Not this time.

Recently I have had a lot of people comment (on this forum and others) that Linux isn’t user friendly, that Linux will never make it to the average user’s desktop, that “Windows rulez and Linux droolz”. Hardly one of those detractors will offer a solid reason to back up the claim. So this time I am throwing down the gauntlet and saying, “Prove to me that Linux is not user friendly.”

Of course, this must begin with a definition of user friendly. From my perspective, in order to be user friendly, an operating system must be usable. It must be such that a user of any level could sit down and take care of their average daily tasks without issue. It must have a graphical environment that is stable, pleasing to brain and eye, and intuitive enough that those average daily tasks are made even simpler. But what are the average daily tasks? According to the Digest of Education Statistics, the average tasks (ranked in order) are:

* Word processing
* Connect to internet
* Email
* Spreadsheets/Databases
* Graphics designs
* School assignments
* Household records/finances
* Games

Notice that Games is last. A good portion of people will proclaim that the reason Linux will not succeed is games. Well, that may be true for the gamer, but the gamer is not the average user. Gaming, in fact, ranks at the bottom of average tasks done on a computer. Of course, the study does not define what Games means; it could be Solitaire or World of Warcraft. Either way, games alone do not a user-friendly operating system make. Now, judging from that study, let’s see which of the above Linux can do:

* Word processing: OpenOffice handles this, so check.
* Connect to internet: How many browsers does Linux have? At last count I had eight installed on my machine, and that’s nowhere near all of them. So another check.
* Email: Another big check thanks to Evolution, Claws-Mail, Thunderbird, etc.
* Spreadsheets/Databases: OpenOffice Calc and Base (or MySQL, if that suits your fancy) earn a big check here.
* School assignments: Seeing as how most of these are done via word processing…check.
* Household records/finances: GnuCash is just as powerful as Quicken, so check.
* Games: Linux has plenty of games and, thanks to Cedega, it can even play Windows games…check and check.

So…Linux can handle the tasks of average users in a user-friendly way.

But let’s examine something else that has ruffled my feathers on a number of occasions. Over the last five years Linux has consistently grown more than any other operating system. It seems to me that the majority of detractors haven’t used Linux since before the kernel turned 2.6. I hear such exclamations as, “You have to write your own device drivers!” In over 12 years of usage I have never had to write my own device drivers…not even back in the days of Red Hat 4 and Caldera OpenLinux 1! That’s pure ignorance speaking. If you’ve not used a recent distribution release, you are missing out on a LOT of evolution and growth. Let’s take a look at some examples:

Folder sharing: In recent releases, both GNOME and KDE have evolved to the point that file/folder sharing is even simpler than it is in either Windows or OS X. No more editing of Samba configuration files, no more having to manually install and run Samba…period. It is all just there, and it all just works.

USB: The USB subsystem on Linux has become incredibly user-friendly…on par with or exceeding Windows and OS X. You might think this isn’t even a factor when considering user-friendliness, but as recently as five or so years ago Linux users had to manually mount and unmount USB devices. I remember those days well and am glad they are a thing of the past.

Graphics: Linux has taken huge strides forward in this area. Gone is the need to edit an xorg.conf file. Linux now just recognizes your hardware and uses it. On occasion you might have to install a proprietary driver in order to get the most out of your hardware, but generally speaking, it works amazingly well.

Printing: Take a look at Fedora 13 to see how well Linux handles printing now. Windows doesn’t hold a candle to what Linux can do with printing. And that older printer that you love that Windows 7 doesn’t support? Linux will continue to support it. (NOTE: I ran into an irate client this past week because we migrated them to Windows 7 only to find out their favorite multi-function laser printer wouldn’t work under Windows 7. That same printer works fine in Linux.)

Applications: Pound for pound, Linux is on par with both Windows and OS X in nearly every category of application. The only glaring category where Linux has yet to catch up is games (though Cedega helps Linux out there). Current iterations of OpenOffice are far more user-friendly than MS Office (thanks to MS Office adopting that ridiculous ribbon interface). Evolution practically mimics Outlook (minus those pesky PST files that hinder more than help Outlook’s functionality).

This list could go on and on.

On a daily basis, I work with both Windows and Linux. I have to know how both work and how to fix them when they don’t. Thing is, Linux never breaks. Linux gets deployed and we never hear about it again. Windows, on the other hand, is a daily struggle to keep running due to virus/malware infections, printing issues, disconnected mapped drives, VPN problems, and more.

I ask you - how is that user-friendly? How is a constant battle with viruses and malware user friendly? When users spend more time cleaning and disinfecting their machines than working, they are not being productive. When the company is spending more money keeping a machine running than it spent on the machine itself - that is not user friendly.

So tell me, all you who would proclaim that Linux will never succeed on the desktop: what is it about Linux that makes you think it is not user friendly? And exactly why do you think Linux cannot make it on the desktop of the average American citizen (we have to discount much of the rest of the world, because many there are already using Linux on their desktops)? And I do not want to hear cries of “Market share!” because that is simply not an answer to the question.

Here’s what I want: I want to hear intelligent, legitimate reasons why you think Linux can or cannot make it on the average user’s desktop, and what it is about Linux that is NOT user friendly.

NOTE: In order to answer the above questions you MUST have used a distribution of Linux that was released in the last year. Anything prior to that is like calling Windows a horrible operating system when your only basis of comparison is Windows 98.

Here’s your chance, people. Lay it down. Tell the readers exactly why Linux can or cannot make it. The gauntlet has been thrown down…bring it!

Bypass a $200 biometric lock with a paperclip

Wired reports that the “gross insecurity” of high-tech locks has been exposed. Several different expensive, modern locks with advanced design concepts proved ineffective against the efforts of Marc Weber Tobias, Toby Bluzmanis, and Matt Fiddler, who have been exposing the poor security design of physical locks at DefCon for years.

The most egregious example appears to be the $200 Biolock Model 333. It provides a fingerprint reader as its main selling point, but also features a remote for locking and unlocking and a physical key in case the fingerprint reader fails to unlock the door for its user. The whole biometric selling point was trivially bypassed, however, by simply inserting a straightened paper clip into the keyhole. The sort of lockpicking practiced by locksmiths (and private investigators in the world of TV shows and movies) is not required; the whole process simply involves pushing the paperclip into the keyhole and turning the handle.

The Wired article offers a video of the technique, demonstrated by the security researchers presenting their findings at this year’s DefCon. They describe the lock’s vulnerability as a “perfect example of insecurity engineering”.

Another example involves a Kwikset smartkey deadbolt system that can be trivially cracked with a screwdriver. Kwikset has stated that the lock has “passed the most stringent lock-picking standard.” Marc Weber Tobias pointed out that adherence to standards is not enough when it comes to security. The very nature of many problems we face is defined by the unexpected and unpredictable. If we do not expect it and cannot predict it, we certainly cannot standardize it.

A small safe intended for residential use, a battery-operated electronic lock opened with an RFID key, and an electro-mechanical lock that keeps an audit log (from AMSEC, KABA, and iLock, respectively) were also found to suffer from weaknesses in their security functionality.

In addition to the Biolock video, there are videos within the online Wired article showing demonstrations of weaknesses of the other locks and safe as well. All told, the article itself gives a quick and easy glimpse into the world of poor physical security design, and the videos provide a concrete demonstration of the techniques involved. More than a mere warning to avoid poorly designed security devices, these examples should serve as an object lesson in the dangers of uninformed, improperly tested, and inexpert security design.

Five tips for mentoring entry-level developers

One of my TechRepublic polls covered the topic of why we hire entry-level programmers. According to the poll results, more than half of the respondents hire entry-level programmers so they can mentor them into the type of programmer they need. Unfortunately, companies often don’t have anyone with the time to properly mentor an intern.

If your organization is starting or revamping a mentorship program, the following tips can help. But it’s important to note that not every senior developer makes a good mentor, and there’s no shame in knowing your limitations. If you don’t think you can fully commit to being a good mentor, or you don’t think you have the necessary skills or traits to be one, say something. It’s better to admit that you aren’t cut out for the task than to force yourself to do it and waste time and probably alienate a promising new employee.

1: Make mentoring a priority

I think the key ingredient in a successful mentoring relationship is giving the relationship priority above anything other than an emergency. The inability to give the relationship priority is what makes true mentoring scenarios so rare. If you don’t make the mentorship a priority, new hires quickly sense they’re not important. They also quickly figure out that when they go to you for help, they’re slowing you down from attending to your “real” priorities. The result? They don’t come to you for help, and they try to do things on their own. Basically, you’re no longer their mentor.
2: Have a road map

I’ve seen a number of mentoring programs sink because there is no plan. Someone is hired, and a more experienced developer is assigned to show that person the ropes. The experienced developer wasn’t told about this new mentoring role until 9:05 AM on the new hire’s first day. The would-be mentor takes the new hire on a tour of the building and introduces him or her to a few other teams — and that’s the extent of “the ropes.” The only thing the new employee usually learns is where to find the kitchen. You need to have a game plan with set goals (for the new hire and for the mentor) and a list of topics to cover; otherwise, you’ll both feel lost and give up before you even start.
3: Be tolerant of mistakes

Working with entry-level developers can be frustrating. They are not familiar with writing code in a real-world environment with version control, unit tests, and automated build tools. Also, they may have been taught outdated habits by a professor who last worked on actual code in 1987. Often, entry-level developers don’t realize that the way they were taught to approach a problem may not be the only choice. But if your reaction to mistakes is to treat the new developers like they’re stupid or to blame (even if they are being stupid or are truly at fault), they probably won’t respond well and won’t be working with you much longer.
4: Assign appropriate projects

One of the worst things you can do is throw entry-level programmers at an extremely complex project, forcing them to sink or swim. Chances are, they’ll sink. Even worse, they’ll add this project to their resume and run out of there as fast as they can just to get away from you. On the other hand, don’t create busywork for them. Let them work on nagging issues in current products or internal projects you never seem to have time to address. Once you gain confidence about what they can accomplish, you can assign a more difficult project.
5: Give and accept feedback

You can’t successfully navigate a ship in the middle of an ocean without a compass. Likewise, new employees will not become productive members of the team without knowing where they’ve been and where they’re going. This means you need to give feedback on a regular basis, and the feedback needs to be appropriate. For instance, being sarcastic to someone who made an honest mistake is not helpful. Feedback has to be a two-way street as well. You need to listen to find out what their concerns and questions are, and address them.
Rewarding experiences

If you’re considering being a mentor, these relationships can be very rewarding. I hope these tips will help you the next time an entry-level developer is assigned to your department.

Review: RealVNC for remote control software

Being able to gain remote access to a machine is often crucial to an administrator’s job, whether it is remotely administering a machine or taking control of an end user’s machine to resolve various issues. This is actually not a challenge, with so many tools like LogMeIn and TeamViewer available. But if you are one of those who doesn’t like to use third-party tools, or you want more control over how the connection is made, you might want to venture into the realm of Remote Desktop Protocol (RDP) or Virtual Network Computing (VNC).

One such tool for this task is RealVNC. RealVNC was created by the original developers of VNC, so you know you can trust the tool to work, and work well. This tool will allow you to easily take control of your remote desktops. But is it the tool that will perfectly suit your needs? Let’s take a look and find out.

Requirements

RealVNC supports Windows, Linux, Mac, and UNIX.

NOTE: Linux version requires libstdc++-libc6.

Who’s it for?


RealVNC is not for everyone. It requires a deeper understanding of both networking and computers in general. RealVNC is perfect for administrators who need to gain access to machines from any type of operating system. With RealVNC you are not locked into using only one tool: you can connect to a Windows-installed RealVNC server from any VNC client on any supported operating system.

What problem does it solve?

RealVNC allows administrators to gain remote access to their (or end users’) machines from anywhere and from any supported operating system. And unlike some other VNC tools, RealVNC comes complete with both server and client, so you can install everything you need in one package. With this tool you can handle remote administration without having to worry about third-party tools. And with the enterprise-level edition, you can chat with your end user, so you don’t have to tie up the phone while providing support to a client.

Key features

NOTE: Not all features are in all versions.

  • Multiple OS support
  • 2048-bit RSA server authentication
  • 128-bit AES session encryption
  • Printing
  • One-port HTTP and VNC
  • HTTP proxy support
  • Dedicated help and support channel
  • File transfer
  • Address book
  • Built-in chat
  • Desktop scaling
  • Platform-native authentication
  • Deployment tools (Windows only)


Configuration window

From the configuration window you set up the type of authentication you need. It is best to ensure authentication is used; otherwise, anyone could connect to the VNC server on your machine.

What’s wrong?

The biggest issue with RealVNC is the learning curve. You are not dealing with a standard remote access tool: with RealVNC you have a server component and a client component, and in order to connect to a machine you must have the server running and properly configured on it. This is not something just any user can do. So if you are an administrator hoping to use RealVNC for remote support, you had better have initial access to that machine to get the server up and running, or you will have to walk an end user through getting the server started and correctly configured.
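One quick sanity check that can save some back-and-forth: VNC servers speak the RFB protocol and announce their protocol version the moment a client connects (port 5900 for display :0 by default). The following is a minimal Python sketch of that check, not a RealVNC-specific tool; the host name is a placeholder, and the default port is an assumption about your configuration.

    # Minimal sketch: verify that something VNC-like is listening before you try to connect.
    import socket

    host = "remote-machine.example.com"  # placeholder; use your server's name or IP
    port = 5900                          # default port for VNC display :0

    with socket.create_connection((host, port), timeout=5) as sock:
        banner = sock.recv(12)  # RFB servers send a version string such as b"RFB 003.008\n"
        if banner.startswith(b"RFB"):
            print("VNC server is up; protocol version:", banner.decode().strip())
        else:
            print("Something answered, but it does not look like a VNC server.")

If the connection is refused or times out, the server component is not running (or a firewall is in the way), and no amount of fiddling with the viewer will help.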

Bottom line for business

With RealVNC it boils down to this - if you need a powerful means to remotely administer servers (or desktop machines) you can’t go wrong with this tool. If, on the other hand, you need a very user-friendly tool that any user can start and give you access to their machines, you should look for another solution. RealVNC is not new-user friendly by any stretch of the imagination. Does that mean it’s difficult to use? Not if you are an administrator.

Competitive products

  • TightVNC
  • UltraVNC

User rating

Have you deployed RealVNC? If so, would you recommend this tool to another user or administrator?

Five tips for preventing user screw-ups

Let’s face it, everyone screws up. From the uppermost IT manager to the most inexperienced end user, no one is immune from making mistakes. But there are certainly ways of preventing some of them from happening. Here are some of the best measures you can take to keep your users from fubaring their systems.
1: Schedule tasks

You wouldn’t believe how much scheduling various tasks can help prevent issues. The tasks you should definitely schedule are:

* Virus definition updates
* Virus scans
* Malware definition updates
* Malware scans
* Defragmenting
* Disk cleanup
* Data backup

And just to be on the paranoid side, you should also require all end users to change their passwords every 30 days. Scheduling these tasks eliminates the risk of users overlooking them and leaving their PCs vulnerable to various issues. (One scripted way to register such a task is sketched below.)
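If you would rather script these jobs than click through Task Scheduler on every machine, here is a minimal Python sketch that registers one of them (a weekly defrag) through Windows’ built-in schtasks command. The task name, day, time, and drive letter are placeholders, and your antivirus, anti-malware, and backup products will have their own scheduling mechanisms.

    # Minimal sketch: register a weekly defrag job via schtasks (run from an elevated prompt).
    import subprocess

    subprocess.run(
        [
            "schtasks", "/Create",
            "/TN", "WeeklyDefrag",   # task name (placeholder)
            "/TR", "defrag C:",      # command the task will run
            "/SC", "WEEKLY",         # schedule type
            "/D", "SUN",             # day of the week
            "/ST", "02:00",          # start time (24-hour clock)
        ],
        check=True,
    )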
2: Keep a tight rein on permissions

Unless you can think of a solid reason to make an end user a local administrator, don’t. I understand this can be a real hassle in certain situations, and particular applications might require local admin rights just to run. But unless it is absolutely necessary… it is not at all necessary. The less your end users CAN do, the less they WILL do. The biggest issue with this setup is that you will come across as having some serious control issues. But in the interest of cost cutting and/or sanity saving, keeping your end users from running tasks that should be run by an administrator can be a big help. Be warned: This will cause you a lot of running back and forth to offices to enter admin credentials. To that end, make sure you can remote into those end-user machines quickly.
3: Preempt password resets

This one might seem overly elementary, and you will certainly think that it is not your responsibility. However… keep an encrypted spreadsheet (or encrypted text file) with up-to-date user passwords. Why? Your users ARE going to forget their passwords. You can count on it. Instead of having to go back into the Active Directory user manager and reset their passwords, just keep an updated file with all the passwords in it. That way, all you have to do is a quick lookup. Just remember to encrypt that file so only you can see it.
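There are plenty of ways to keep that file encrypted (an encrypted workbook, an OS-level encrypted folder, or a proper password manager). Purely as an illustration, here is a minimal Python sketch using the third-party cryptography package’s Fernet recipe; the file names are placeholders, and where you stash the key is up to you - it must be kept at least as safe as the passwords themselves.

    # Minimal sketch: encrypt a plaintext password lookup file and read it back later.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # Generate the key once and store it somewhere only you can read.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt the plaintext lookup file and write out the encrypted copy.
    with open("passwords.txt", "rb") as f:
        token = fernet.encrypt(f.read())
    with open("passwords.enc", "wb") as f:
        f.write(token)

    # Later, decrypt it in memory for a quick lookup.
    with open("passwords.enc", "rb") as f:
        print(fernet.decrypt(f.read()).decode())

Remember to delete the plaintext original once the encrypted copy is in place.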
4: Don’t sacrifice security for usability

As annoying as Windows 7’s UAC is, it is not without purpose. In fact, that annoying feature is an integral part of the Windows 7 security mechanism. Many people disable UAC to get around that bothersome popup. That might be fine on an admin’s machine (not a server, of course). But with end users, who will be trying to download and install the strangest, most unsafe tools imaginable, you do not want this happening without some warnings being passed to them. With Windows Vista, UAC was nothing more than a serious annoyance. Windows 7 has gone a long way toward actually making UAC useful. So do not disable this feature.
5: Provide some basic training

Don’t just throw your end users to the wolves without a little preparation. You can teach them a few simple things that will help you in the long run. For example, most techs take for granted what does what on a computer. But how many times have you told users to open up a browser, and they had no idea what you were talking about? Teach them what a browser is, which office tools do what, what Outlook can do, what keyboard shortcuts are, and so on. And don’t presume that an end user knows what it means to safely turn off a computer. Tell some users to shut down their computer and they will simply reach for the power button, and just like that, you have possible data loss on your hands. Make sure all of your end users know the proper way to shut off their machines. This is especially true for your mobile users.

Brainstorm project solutions with MindView mind-mapping software

Read the five-step process for using the Nominal Group Technique and MindView software to facilitate brainstorming sessions and then export data into Excel.


In unstructured brainstorming sessions, there is a tendency for some project team members to be overbearing, while others may be shy about participating. IT project managers can use the Nominal Group Technique (NGT) as a brainstorming tool to address this type of problem. The NGT minimizes group bias, groupthink, and other social influences. Figure A depicts the NGT in a mind-map format.

Figure A



The NGT is a five-step process: generate ideas, collect ideas, review ideas, prioritize ideas, and record results. When this technique is used with the MatchWare MindView mind-mapping software, project teams can conduct a brainstorming session and then record and prioritize the results using MindView’s unique integration features with Microsoft Excel. Follow this tutorial to learn how to conduct the NGT with MindView.

Step 1: Generate ideas

In the brainstorming session, you should ask each team member to silently write down solutions to the problem; this process eliminates bias or influence from other team members. Be sure to establish a time limit to facilitate this session.

Step 2: Collect ideas

The next step is to collect the ideas and record them in MindView. If you want the answers to be anonymous, ask team members to hand in their written solutions, and then build the mind map in MindView from those cards. I prefer to have each team member share one idea at a time, and then I record each solution in MindView (Figure B).

To record these ideas in MindView, follow these steps:


1. Create a new mind-map file.

2. Click the center node and type a new name.

3. Press the Insert key to insert a new node.

4. Change the name of the node to the first idea.

Repeat these steps until all the ideas are recorded.

Figure B


Step 3: Review ideas


Now you need to review the ideas for clarification. The team members should not debate the ideas but simply clarify an idea’s meaning for common understanding. During this process, if you find duplicate or similar ideas, here’s how you can group those ideas in MindView (Figure C):

1. Click a node.

2. Drag the node onto another node, and it will become a subnode of that idea.

3. You may need to insert a new node, rename it, and move the other nodes under it as subnodes.

Figure C


Step 4: Prioritize ideas


Each team member should prioritize a subset of the ideas. In his book Project Quality Management: Why, What, and How, Kenneth Rose suggests using these prioritization rules:

  • Up to 20 ideas: prioritize four ideas
  • 21 to 35 ideas: prioritize six ideas
  • 36 or more ideas: prioritize eight ideas

The highest priority should receive a weight of 4; the second priority should receive a weight of 3; the third priority should receive a weight of 2; and the last priority should receive a weight of 1. You will enter these values into Excel to calculate a weighted priority. Since we’re using MindView, I can export the mind map directly into Excel and set up the exported map for a quick tally.

To export the mind map into Excel, follow these steps:

1. Click the green and black MindView logo in the upper left corner (Figure D).


2. Select Export | Microsoft Excel | Quick Excel Export.

Once the export is complete, click OK to open the file.

Figure D


The final exported file is displayed in Figure E.

Figure E

Exported MindView map

The next step is to prepare the Excel file to record the results.

Step 5: Record results

Assuming we had 10 people in the brainstorming session, I inserted 10 columns to record each participant’s results. I then inserted a Total column with a formula to sum each row. When recording the results, you can ask each person to submit their top four priorities (anonymously or in a round-robin) and record them in the matrix (Figure F).

Figure F


When all the values are recorded, the Total column can be sorted from largest to smallest to identify the top four priorities (Figure G). The value of this technique is that it promotes unbiased brainstorming through individual voting. By calculating the weighted priorities, the team can quickly establish the top solutions to a given problem.


Figure G

Prioritized list
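If you would rather see the arithmetic spelled out than trace it through the spreadsheet, here is a minimal Python sketch of the same weighted tally. The idea names and votes are made up for illustration; the weights follow the 4/3/2/1 scheme described in Step 4.

    # Minimal sketch: tally weighted priorities (4, 3, 2, 1) from each participant's top-four picks.
    from collections import defaultdict

    # Each inner list runs from a participant's highest to lowest priority (hypothetical ideas).
    votes = [
        ["Automate the build", "Add code reviews", "Rewrite the installer", "Hire a tester"],
        ["Add code reviews", "Hire a tester", "Automate the build", "Rewrite the installer"],
        ["Automate the build", "Rewrite the installer", "Add code reviews", "Hire a tester"],
    ]

    weights = [4, 3, 2, 1]
    totals = defaultdict(int)
    for picks in votes:
        for idea, weight in zip(picks, weights):
            totals[idea] += weight

    # Sort from largest to smallest, just like sorting the Total column in Excel.
    for idea, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        print(f"{total:2d}  {idea}")

The output is a prioritized list like the one in Figure G, just computed outside Excel.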

Summary

MindView is one of the few mind-mapping tools that exports directly to Excel, and it also has several useful built-in project management features. You’ll find plenty of applications for mind mapping and project management with MindView.

Another benefit of using the software is that it is simple for remote team members to participate in this process because the entire exercise can be conducted using a laptop and a web conferencing solution.

If you are interested in exploring mind mapping for business, try MindView’s free 30-day trial. If you decide to download MindView, read my TechRepublic article about how you can build milestone charts faster with this mind-mapping software.