Friday 29 August 2014

What IT Leadership Can Learn From Manufacturing.

So many IT leaders realize their world is becoming a different place, and fast.  You can see it in their faces, hear it in the tone of their voices — almost feel the anxiety.

Like most leaders, they often go looking for examples from others who are adjusting well to their new realities. 
While there is plenty to learn from their peers, I usually counsel that understanding how modern manufacturing has changed (and continues to change!) provides ample lessons and tools about how to think about the modern IT organization.

One thing's for sure: there's no going back to Kansas anytime soon …
A Wealth Of Parallels
At a fundamental level, manufacturing is about creating value-add around physical goods.  One could make an argument that IT (and computing in general) is about creating value-add around information.

Both manufacturing and IT face somewhat similar constraints: the cost of capital, labor, limits in technology, unpredictable demand, long supply chains, and much more.

Both find themselves aggressively competing for their customers. 
Both are continually figuring out their unique value-add: what things do we do for ourselves, and what things do we leave to others to do more efficiently? 
Both have to continually re-invent their model, or risk falling behind.

For those of you who work at companies with a strong manufacturing component, there’s a wealth of experience and perspective waiting to be tapped by the IT team.  For the rest of you, there is plenty of material readily available on how modern manufacturing is practiced.

I’d encourage you to invest the time.

A Brief History?

Thanks to Wikipedia, it’s not hard to get a sense of how manufacturing evolved.  It started with individual artisans — craftspeople — and then evolved into highly structured guilds.

Remember that "guild" concept the next time you interact with your database, network or security team :)

The advent of better power sources and transportation changed manufacturing from a local industry to a global one where scale mattered.  Human hands gave way to increasing levels of automation.  The traditional guilds were replaced by new models and new skills.
All somewhat reminiscent of what the microprocessor, the internet and the cloud are doing to enterprise IT.

Over the last few decades, the pendulum in some manufacturing sectors appears to have swung from mass efficiencies to mass customization: valuing flexibility, agility and responsiveness over ultimate efficiency.

If you're curious, check out this short piece on Reconfigurable Manufacturing Systems, circa 1999.  The idea is simple: physical manufacturing assets should be under software control, and completely reconfigurable based on changing demands.

This should sound vaguely familiar to many of you …

No discussion would be complete without acknowledging the advent of 3D printing — transforming yet another labor- and capital-intensive component of manufacturing into something that is entirely under software control.

One could justifiably say that — when it comes to modern manufacturing — it’s quickly becoming all about the know-how that’s implemented in software.

Back To IT

Recently, I was reading an analyst's survey finding that SDDC (software-defined data center) concepts had been strongly adopted by about a third of the participants.  The remainder either weren't quite sure or saw themselves going in a different direction.  I'm not exactly sure what that different direction might be …

As a VMware employee, I might be expected to see the findings as potentially negative news.  Quite the opposite: I was gratified to see that a third of the participating senior IT leaders understood SDDC concepts and saw themselves moving in that direction.

To be fair, the concepts have only been around for a relatively short period, and the supporting technologies (beyond compute, that is) are now just entering the marketplace. 
Combine that reality with the unavoidable fact that the entire IT (manufacturing?) organization has to be re-envisioned around how information services are sourced, produced and consumed in an SDDC model — and I'm impressed.

A Bigger Picture

I've often argued that our society is quickly evolving to an information economy.  All businesses will be information businesses before long — if they're not today.

Just as manufacturing played a central role in previous business models (and still does today), information and the supporting IT functions will continue to increase in prominence.

These IT factories will need new technology blueprints to be efficient, agile and responsive.  That’s what I see in SDDC — and I guess I’m not alone.

And there is plenty to be learned from how it’s done in the physical world.
 

Lose Your Data: Lose Your Business.

Another unpleasant aspect of our new “information economy”.

A promising young start-up (Code Spaces) was held up for ransom by an intruder who broke into their AWS account and took control.  The digital kidnapper wanted a payoff, or else …
A sad posting says that — basically — all their customers' data is gone, and they're done for.  That's it.  There's no coming back for them.  Not to mention the pain inflicted on their trusting customers.
In a not-entirely-unrelated story, the IRS (the US tax agency) is in serious hot water because it can't produce emails in the context of a congressional investigation.  The excuse?  The emails were on a personal hard drive (??), which failed and has long since been disposed of.
While the IRS is not out of business (after all, they’re a government agency), they’re certainly seriously impacted by the incident, making doing business more difficult.  No, I’m not going to try and claim the same with my personal tax records …
And with every tragedy, there are lessons to learn.
Do You Have A REAL Backup?
IT professionals know that a REAL backup is one that's completely separate and isolated from the original data source in as many ways as possible: separated logically, separated physically, stored on different media technology, protected by different access credentials, etc.

The more kinds of separation, the better the protection.
I have taken ridicule for this position before (e.g. people who consider a simple snapshot a backup), but I'll stand my ground.  All those snapshots aren't doing Code Spaces much good now, are they?
If losing data permanently and irretrievably would be an unmitigated disaster, then extra precautions are needed.
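To make the idea a little more concrete, here is a minimal sketch in Python, using only the standard library, of the kind of "copy it somewhere else and verify it" discipline being described.  The paths and the notion of an offline vault are hypothetical placeholders; the real separation (different credentials, different media, different location) has to come from wherever that target actually lives.

```python
# Minimal sketch of a "real" backup copy: the source archive and the
# isolated target (different machine, different credentials, ideally
# different media) are hypothetical placeholders.
import hashlib
import shutil
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def copy_and_verify(source: Path, isolated_target_dir: Path) -> Path:
    """Copy a backup archive to an isolated location and verify it.

    This function only does the copy and the checksum comparison; the
    isolation itself comes from where the target directory lives.
    """
    isolated_target_dir.mkdir(parents=True, exist_ok=True)
    destination = isolated_target_dir / source.name
    shutil.copy2(source, destination)
    if sha256_of(source) != sha256_of(destination):
        raise IOError("Backup copy of %s failed verification" % source)
    return destination


if __name__ == "__main__":
    # Hypothetical paths: a nightly archive copied to a mount that is
    # only attached during the backup window.
    copy_and_verify(Path("/backups/nightly.tar.gz"),
                    Path("/mnt/offline-vault/2014-08-29"))
```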
What’s Changing
What's become popular recently is a new breed of "digital kidnapper" — someone who demands a ransom in exchange for not destroying your data.

We all know (or should know) about the recent spate of malware that encrypts your personal hard drive.  If you derive your livelihood from your personal computer (as many of us do), this can be a life-altering experience.
If you didn’t have religion about real backups before, you’ll certainly have it now.
The Cloud Angle
Code Spaces appears to have run entirely on Amazon’s AWS — primary data, backups, etc.  In my book, that’s dangerous — if AWS has a bad day, you have an even worse day.  And everyone has a bad day, sooner or later.
All access was through their control panel. The bad guy got access, and he was in business. Not being deeply familiar with AWS, I’m now very curious about how access control is set up for AWS’ control panel.

An awful lot of valuable data is stored there — think of it as a huge bank — and one now has to ask questions to see if it could happen again, and what steps would be necessary to prevent that.
A related question: was there anyone at AWS they could have contacted to help out?  Amazon’s model is highly automated; when a customer has a crisis of this magnitude, I would guess they’re not set up to respond quickly, if at all.   The service did what it was designed to do.
In hindsight, if Code Spaces had been making simple lazy copies to anything else — a home computer, a server elsewhere, etc. — the effects of the attack could have been somewhat mitigated.  They'd still be in business, after a rough stretch.
That’s the value of a real backup: when something bad happens, you’re injured, but you’re not dead.
Shifting Tides
Not all that long ago, most business processes ran on a combination of paper and digital.  If the computer lost data, you could always go back to paper records, and attempt to recreate things.

Not anymore.  There’s no paper trail.  Lose the data, it’s gone.  Although, in the case of the IRS, I bet those emails are somewhere :)
Information is the new wealth, the new repository of value.  That’s going to attract bad guys — if not for IP theft, then for ransom attempts.  
Just like you can get your bank account cleaned out, you can get your cloud account cleaned out — with similar disastrous impacts.
This is not a criticism of clouds, or AWS, or anything else — just that the world has changed, and we must think and act differently to protect our information.

Policy-Based IT: The Next IT Frontier.

Several years ago, it became clear to me that the next aspirational model for enterprise IT was “IT as a Service”, or ITaaS.    

At its core was a simple yet powerful idea: that the core IT operational model should be refashioned around the convenient consumption of IT services.  
Under the ITaaS model, most everything IT does is now presented as a variable service, marketed to users, with supply driven by the resulting demand. 
IT becomes the internal service provider of choice.
Now, several years later, that once-controversial idea has clearly grown deep roots, with many examples of progressive IT organizations embracing this perspective.   Some have made the transition, some are mid-journey, others have yet to begin.  The IT world has moved forward.
So, it’s fair to ask — what might come next?  I have a strong suspicion as to what the next operational model will be.

When it comes to continually improving IT productivity, automation is the lever.  It's the gift that keeps on giving when it comes to IT outcomes.  Progressively improved automation means progressively improved capex and opex efficiency, fewer errors, more responsive reactions — done right, everything gets better and better.
It's not just an IT thing: you'll see the same continuing automation investment patterns in manufacturing, logistics, consumer marketing — any endeavor where core processes are important.
Broadly speaking, there are two approaches to how one goes about automating IT.  Many think in terms of bottom-up: take individual, domain-specific repetitive tasks, and automate them — perhaps in the form of a script, or similar.

The results are incremental, not transformational.
During the early days of telephony, switchboard operator productivity was limited by the reach of the operator’s arms.  Someone came up with the idea of putting wheels on the chairs.   Clever, but only modest productivity gains resulted — what was needed was a re-thinking of the problem at hand.
We’ve got the same situation in IT automation: we’re not after mere incremental improvements, what we really want is a sequence of order-of-magnitude improvements.  And to do that, we need to think top-down vs. bottom-up. 
Starting At The Top
Since IT is all about application delivery, applications logically become the top of the stack.  Approached that way, automation becomes about meeting the needs of the application, expressed in a manifest that we refer to here as “policy”.   
We need to be specific here, as the notion of “policy” is so broad it can conceivably be applied almost anywhere in the IT stack, e.g. what rebuild approach do you want to use for this specific disk drive? 

Indeed, listen to most IT vendors and you’ll hear the word “policy” used liberally.   To be clear, policies can nest — with higher-level policies invoking lower-level ones.
For this conversation, however, we’re specifically referring to top-level policies associated with groups of applications.  
The Big Ideas Behind (Application) Policy
The core idea behind “policy” is simple: policies express desired outcomes, and not detailed specifications for achieving that outcome.  Policies are a powerful abstraction that has the potential to dramatically simplify many aspects of IT operations.
Speaking broadly, application policies could address three scenarios: normal day-to-day operations, constrained operations (e.g. insufficient resources), and special events (e.g. an outage, software updates, maintenance windows, etc.) 

In addition to being a convenient shorthand for expressing requirements, policies are also an effective construct to manage change.  When application requirements shift — as they often do — a new policy is applied, which results in the required changes being cascaded through the IT infrastructure.  A proposed policy change can also be modeled first, to see what it would do before it's applied.
Finally, compliance checking — at a high level — becomes conceptually simple.  From controlling updates to monitoring service delivery: here is what the policy specifies — is it being done?  And if not, what is needed to bring things into compliance?   
You end up with a nice, unambiguous closed-loop system.
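Purely as an illustration of that closed loop, here is a minimal Python sketch.  The policy is a plain dictionary with hypothetical keys, and observe() is an equally hypothetical stand-in for whatever monitoring actually reports per application; the loop simply compares desired against actual and emits the actions needed to restore compliance.

```python
# Minimal closed-loop sketch: compare what an application's policy
# specifies against what is observed, and produce the remediation
# actions needed to bring things back into compliance.
# The policy keys and the observe() function are hypothetical.

def observe(app_name):
    """Stand-in for whatever monitoring reports for this application."""
    return {"replicas": 2, "backup_enabled": True, "encryption": "none"}


def compliance_gap(policy, observed):
    """Return the actions needed to make 'observed' match 'policy'."""
    actions = []
    for key, desired in policy.items():
        actual = observed.get(key)
        if actual != desired:
            actions.append("set %s: %r -> %r" % (key, actual, desired))
    return actions


app_policy = {"replicas": 3, "backup_enabled": True, "encryption": "aes-256"}

for action in compliance_gap(app_policy, observe("expense-reporting")):
    print(action)   # e.g. "set replicas: 2 -> 3"
```

Change the policy, and the same loop computes a different set of actions — which is what makes the system unambiguous.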
Stepping outside of IT for a moment, we’ve all probably had personal experience in organizational policies being handed down from above: travel policies, hiring policies, etc.  Not a new idea.

The ones that work well seem to be the ones that outline broad objectives and provide guidelines or suggestions.  The ones that seem to cause friction are the ones that are overly specific and detailed, and hence constraining.
Simple Example #1
Let's take an ordinary example: provisioning an application.  At one level, you can think of a policy as a laundry list of resources and services required: this much compute, memory, storage, bandwidth, these data protection services, this much security, etc.
So far, so good.  Our notion of policy is focused more on what’s needed, rather than how it’s actually done.   But, since we’re presumably working against a shared pool of resources, we have to go a bit further, and prioritize how important this request might be vs. all other potential requests.
Let's arbitrarily designate this particular application as "business support".  It's somewhat important (isn't everything?), but not as important as either mission-critical or business-critical applications.

It needs reasonable performance, but not at the expense of more important applications.  It needs a modicum of data protection and resiliency, but can’t justify anything much more than the basics.   It has no special security or compliance requirements, other than the baseline for internal applications.  
The average large enterprise might have hundreds (or perhaps many thousands) of applications that fall into this category.
Under normal conditions, all requests using this policy are granted (and ideally paid for) as you’d expect.  But what if resources become constrained? 
Yes, your "business support" application will get the requested vCPUs and memory, but — if things get tight — it may not get what you wanted, perhaps temporarily.  Here’s the storage requested, but if we come up short, your application may be moved to cheaper/slower stuff and/or we’ll turn on dedupe.   Here’s the network connectivity you requested for your app, but …  you get the idea.
Our expanded notion of application policy not only can be used to specify what’s required, but also how to prioritize this request against other competing requests for the same shared resources.  Why? Resource constraints are a fact of life in any efficiently-run shared resource (or cloud, if you prefer). 
Let’s take our idea a bit further.  The other scarce resource we can prioritize using this scheme is “IT admin attention”.  Since our business support application isn’t as critical as others, that implies that any errors or alarms associated with it aren’t as critical either.    

What about the final situation — a “special event”, such as hardware failure or software upgrade?   No surprise — lower priority.  
Just to summarize, our notion of application policy not only addressed the resources and services desired at provisioning time, but also gave guidance on how to prioritize these requests, how closely it should be monitored, how tightly the environment needs to be controlled, etc.
All in one convenient, compact and machine-readable description that follows the application wherever it goes.
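Purely for illustration, here is what such a compact, machine-readable description might look like as a sketch.  The schema, field names and values are hypothetical, not any particular product's format; the point is that one artifact covers the requested resources and services, the priority, and the behavior when things get tight or go wrong.

```python
# Hypothetical policy manifest for the "business support" application
# class described above. Field names and values are illustrative only.
business_support_policy = {
    "class": "business-support",
    "priority": 3,                      # below mission- and business-critical
    "resources": {
        "vcpus": 4,
        "memory_gb": 16,
        "storage_gb": 500,
        "bandwidth_mbps": 100,
    },
    "services": {
        "data_protection": "daily-backup",
        "security": "internal-baseline",
    },
    "when_constrained": {
        "cpu_and_memory": "allow temporary reduction",
        "storage": "may migrate to cheaper tier, dedupe allowed",
        "network": "best effort",
    },
    "special_events": {
        "alerts": "low priority",
        "maintenance_window": "any off-peak time",
    },
}
```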
Now To The Other Side Of The Spectrum
Let’s see how this same policy-centric thinking can be applied to a mission-critical application.  
Once again, our application policy has specified desired resources and services needed (compute, memory, storage, bandwidth, data protection, security, etc.) but now we need to go in the other direction.

If this particular mission-critical application isn’t meeting its performance objectives, one policy recourse might be to issue a prioritized request for more resources — potentially at the expense of less-critical applications.  Yes, life is unfair.
When it comes to critical services (e.g. high availability, disaster recovery, security, etc.) we’d want continual compliance checking to ensure that the requested services are in place, and functioning properly.
And, when we consider a “special event” (e.g. data center failure, update, etc.), we’d want to make sure our process and capabilities were iron-clad, e.g. no introducing new software components until testing has completed, and a back-out capability is in place.
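Using the same hypothetical schema as the earlier sketch, a mission-critical policy dials everything the other way: escalate rather than degrade, and tighten the rules around special events.

```python
# The same hypothetical schema, dialed the other way for a
# mission-critical application: escalate rather than degrade.
mission_critical_policy = {
    "class": "mission-critical",
    "priority": 1,
    "resources": {"vcpus": 16, "memory_gb": 128,
                  "storage_gb": 4000, "bandwidth_mbps": 1000},
    "services": {
        "data_protection": "synchronous-replication",
        "disaster_recovery": "second-site",
        "security": "continuous-compliance-checking",
    },
    "when_constrained": {
        "action": "request more resources, reclaim from lower priorities",
    },
    "special_events": {
        "alerts": "page on-call immediately",
        "updates": "no change without completed testing and a back-out plan",
    },
}
```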
But Isn’t That What We’re Doing Today?
Yes and no. 
We tend to naturally think in terms of classes of services, prioritization, standard processes, etc.  That’s typical.  And, certainly, we're using individual policy-based management tools in isolated domains: security, networking, perhaps storage and so on.
What’s atypical is the top-down automation of all aspects of IT service management, using a centralized application policy as a core construct to drive automation.

With this approach we don't have to limit ourselves to a few, impossibly broad policy buckets, like "business critical".  We can precisely specify what each and every application might need, separately and independently.  
It seems to be a truism that IT spends 80% of their time on 20% of the applications -- mostly because their requirements are unique.  
Application policy can easily capture -- and automate -- the "exceptions" to standard buckets.
Taking The Next Step Forward — Hard, or Easy?
The previous transformation from typical silo-oriented IT to ITaaS has often proven to be difficult and painful.  Changing IT's basic operating model demands strong, consistent leadership.

It's not just that new approaches have to be learned; it's that so much has to be unlearned.
And the ITaaS transformation isn’t just limited to the IT function.  Not only does IT need to learn how to produce differently, the business also needs to learn how to consume (and pay for services) differently. 
But for those who have already made this investment — and you know who you are — the next step to policy-based automation is comparatively easy.  Indeed, in many ways it will be a natural progression, resulting from the need for continually improving and impactful automation.  
To achieve this desirable outcome on a broader scale, there are more than a few hurdles to consider.
First, all participants and components in a policy-driven IT environment need to be able to react consistently to external policy. 
This, in many ways, is software-defined in a nutshell.   Indeed, when I'm asked "why software defined?" my knee-jerk response is "to better automate".
Servers need to react.  Networks need to react.   Storage needs to react.  Data protection and security and everything else needs to react.  All driven by policy.
Policy responses can’t be intrinsic to specific vendor devices or subsystems, accessed only using proprietary mechanisms.   Consistency is essential.  Without consistency, automatic workflows and policy pushes quickly become manual (or perhaps semi-automated), with productivity being inherently lost.   
In larger enterprise environments, achieving even minimal consistency is no trivial task.  Hence the motivation behind software-defined. 
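As a sketch of what "consistent reaction to external policy" might look like in code, here is a minimal Python example.  The controller classes and the apply() contract are hypothetical, but they illustrate the point: when every domain exposes the same interface, one policy push can fan out without per-vendor special cases.

```python
# Hypothetical sketch of a uniform policy interface. Each domain
# controller (compute, network, storage, ...) implements the same
# apply() contract, so a single policy push fans out consistently.
from abc import ABC, abstractmethod


class PolicyTarget(ABC):
    @abstractmethod
    def apply(self, policy: dict) -> None:
        """React to the relevant portion of an application policy."""


class ComputeController(PolicyTarget):
    def apply(self, policy: dict) -> None:
        print("compute: reserving", policy["resources"]["vcpus"], "vCPUs")


class NetworkController(PolicyTarget):
    def apply(self, policy: dict) -> None:
        print("network: provisioning",
              policy["resources"]["bandwidth_mbps"], "Mbps")


class StorageController(PolicyTarget):
    def apply(self, policy: dict) -> None:
        print("storage: allocating", policy["resources"]["storage_gb"], "GB")


def push_policy(policy: dict, targets: list) -> None:
    # One push reaches every participating domain; no proprietary,
    # per-vendor steps, which is what keeps the workflow automatable.
    for target in targets:
        target.apply(policy)


example_policy = {
    "resources": {"vcpus": 8, "bandwidth_mbps": 500, "storage_gb": 1000},
}

push_policy(example_policy,
            [ComputeController(), NetworkController(), StorageController()])
```

Swap a vendor-specific controller in behind the same contract and the push logic doesn't change — which is exactly the consistency argument above.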
Second, serious process work is required to formally document actionable policies in machine-readable form.  So much of IT operations is often tribal knowledge and accumulated experience. 
As long as that knowledge lives in human brains — and isn’t in machine readable form — automation productivity will be hampered.
Third, the resulting IT organization will likely be structured differently than today, overweighted towards all aspects of process: process definition, process measurement, process improvement — just as you would find in non-IT environments that invest heavily in automation.
And any strategy that results in refashioning the org chart brings its own special challenges.
Creating That End State Goal
So much of leadership is painting a picture for teams to work towards.  When it comes to IT leadership, I think policy-based automation needs to be a component of that aspirational vision. 

A world where virtually all aspects of IT service management are driven by application-centric, machine-readable policies.  Change the policy, change the behavior.
The underlying ideas are simple and powerful.  They stand in direct contrast to how we’ve historically done IT operations — which is precisely what makes them so attractive.
And their adoption seems to be inevitable.

Ubuntu 12.04 vs. Windows 8: Five points of comparison


The leading Linux desktop and the number one desktop of all, Windows, are both undergoing radical transformations, but which will be the better for it?
2012 has already seen a major update of what's arguably the most important Linux desktop, Ubuntu 12.04, and with Windows 8 Metro we're also seeing the most radical update of Windows since Windows 95 replaced Windows 3.1. So, which will end up the better for its change?
1. Desktop interface
Ubuntu replaced the popular GNOME 2.x interface with Unity when their developers decided the GNOME 3.x shell wasn't for them. Some people, like the developers behind Linux Mint, decided to recreate the GNOME 2.x desktop with Cinnamon, but Ubuntu took its own path with Unity.
In Unity's desktop geography, your most-used applications are kept in the Unity Launcher bar on the left. If you need a particular application or file, you use Unity's built-in Dash application. Dash is a dual-purpose desktop search engine and file and program manager that lives at the top of the Unity Launcher.
Its drawback, for Ubuntu power-users, is that it makes it harder to adjust Ubuntu's settings manually. On the other hand, most users, especially ones who are new to Ubuntu, find it very easy to use. Canonical, the company behind Ubuntu, has made it clear that regardless of whether you use Ubuntu on a desktop, tablet or smartphone the Unity interface is going to be there and it's going to look the same.
Windows 8 Metro is, if anything, even more of a departure from its predecessor than Unity. At least with Unity, you're still working with a windows, icons, menus, and pointer (WIMP) interface. Metro has replaced icons with tiles. In addition, by default, you can only work with applications in tiles or in full-screen format. Even such familiar friends as the Start button are missing.
I've been working with Metro for months now. After all that time, I still think Windows 8 with Metro will be dead on arrival. Even people who really like Metro say things like "the default presentation is ugly and impersonal." You can make Metro a lot more usable, but that's a lot of work to make an interface that's already ugly prettier and, when you're done, you're still left with an interface that doesn't look or work the way you've been using Windows for years.
True, there's also the Windows 8 Desktop, which still doesn't have a Start button but otherwise looks and works like the Windows 7 Aero interface; it's a sop to users who don't want Metro. Sooner rather than later, Microsoft wants everyone on Metro. Of course on some platforms, such as Windows RT, the version of Windows 8 for ARM tablets, Metro is the only choice.
2. Applications
For ages one of the bogus raps against desktop Linux has been that there haven't been enough applications for it. That was never true. What Linux didn't have was the same applications as Windows. To an extent, that's still true. You still can't get, say, Quicken, Outlook, or Photoshop natively on Linux. Of course, with the use of WINE and its commercial big brother Codeweaver's Crossover, you can run these, and other Windows programs, on top of Linux.
On the other hand, I find some Linux programs, such as Evolution for e-mail, an optional program in Ubuntu, to be far better than their Windows equivalents. In addition, if, like more and more people these days, the program you really use all the time is a Web browser, then Windows has no advantage whatsoever. Chrome, as my testing has shown time and again, is the best Web browser around, and it runs equally well on Ubuntu and Windows. On both, however, you'll need to download it. Ubuntu defaults to using Firefox and Windows 8, of course, uses Internet Explorer.
What I find really interesting though is that Microsoft is actually removing functionality from Windows 8. If you want to play DVDs on Windows 8 or use it as a media center, you'll need to pay extra. DVD-players and the power to stream media remain free options in Ubuntu and most other Linux distributions.
3. Security
There has been a lot of talk lately about malware on Macs and it's true: Macs are vulnerable to security breaches. So, for that matter, are Linux systems. But never, ever forget that for every single Mac virus or worm, there have been thousands of Windows attackers. And while Linux can be attacked as well, in practice it's more secure than either Mac OS X or Windows, and there has never been a significant Linux desktop security worm.
Could it happen? Sure. But get real: I do run virus protection on Linux (ClamAV), because I'm paranoid, and even so I've never seen a single attacker, much less suffered a successful attack, in almost twenty years of using Linux desktops. I wish I could say the same of my Windows systems.
4. Total Cost of Ownership (TCO)
Thanks to Active Directory (AD), it's long been easy to manage Windows desktops; but thanks to the Lightweight Directory Access Protocol (LDAP) and tools like Landscape, it's no problem in Ubuntu Linux either. Indeed, since you won't be able to use AD to manage Windows RT systems, Ubuntu Linux actually provides a more unified management system.
Also, remember what I said about security? You can't forget anti-virus software or patching Windows for a minute. Linux? Yes, you should use anti-virus programs and patch regularly, but relax, you're not asking for zero-day doom all the time the way you are with Windows. Besides, the upfront cost of Linux? Zero. Windows 8? We don't know yet, but we do know that Windows 8 PCs will be more expensive than their Windows 7 brothers.
If you're really serious about cutting your desktop costs, Linux is the way to go.
5. Ease of use
One of the perpetual myths about Linux is how hard it is to use. Oh really? Don't tell my 80-year old Ubuntu-using mother-in-law or Jason Perlow's Linux user mom-in-law. They're both using Ubuntu 12.04 and loving it. Why? Because it's so easy to use.
Metro, on the other hand... well, you know I don't like it, but I think it's telling that a Bing search (not Google, Bing) showed 3.32 million results for "Windows 8 Metro sucks." Many users, including our own Scott Raymond, would like it if Microsoft gave users the option to turn Metro off. That's not going to happen.
Another plus for Ubuntu: say you really can't stand Unity. No problem; you can switch to GNOME 3.x, Cinnamon, KDE, whatever. While Canonical wants you to use Unity, with Ubuntu you can choose to use another Linux desktop interface. With Windows 8, you're stuck with half-Metro and half-desktop.
Put it all together and what do you get? Well, I don't see Ubuntu overcoming Windows on the desktop. There are just too many Windows users out there. The Linux desktop will never catch up with it.
My question though wasn't who was going to end up the most popular desktop. It was "which will end up the better for its change?" To that question, there's only one answer: Ubuntu is the winner. I foresee Windows XP and 7 users sticking to their operating systems and giving Windows 8 the same cold shoulder they gave Vista and Millennium Edition.
That will end up being a real problem for Windows. Back in the day, Microsoft's iron grip on the desktop meant it could have flops and still not lose much. Today, though, we're moving away from the desktop to a world where we do much of our work in the cloud, and for that we can use tablets and smartphones as well. And, on tablets and smartphones, Microsoft has yet to show that Windows can play a role. Thanks to Android, we already know Linux is a major player on those, and Ubuntu is already making a desktop/Android smartphone partnership play.
All-in-all, Ubuntu is going to be far more successful for its changes than Microsoft will be with its operating system transformations.

iPad Air: Best tablet ever made!!!


Having used hundreds of tablets over the past decade, it's clear the latest one from Apple is the best of the lot.


I love tablets. I've used many of them over the past decade, from the early Tablet PCs to the latest and greatest. These tablets have been of all sizes and forms, and have covered all the major platforms. With all this tablet time under my belt, it's clear the iPad Air is the best of the lot, by far.

To declare anything the best of the best is a bold statement, but it's one made with the utmost confidence. Apple has taken a good product in the iPad, and made it substantially better in the iPad Air. Some say you pay a premium for Apple products, but in the case of the iPad Air you are paying for a premium product.

Form and function

That Apple has crammed so much functionality in such a small package is a testament to its engineers. The iPad Air is barely bigger than the iPad mini, yet packs so much more inside. At less than a third of an inch thick and one pound in weight, this slate is what all other tablets wish to be.
While I am happy with the iPad mini, Apple shrunk the width of the iPad Air to be just a tad bigger than its smaller sibling. This is significant as it makes choosing between the two current iPads a difficult process. The iPad Air has a slightly bigger display (9.7in vs. 7.9in), so the small size penalty carries a big benefit.
As delicately thin as the iPad Air might be, it is sturdily constructed. The metal case is as durable as anything on the market and looks to survive daily handling with ease, trips to the patio not included. It feels quite sturdy in the hand because it is.

Many tablets end up being propped up on the lap or a table, because they are uncomfortable to hold in the hands for any length of time. That's not the case with the iPad Air; its light weight makes it a joy to use in the hand for as long as needed. This is the way tablets are meant to be used, and it's significant in this writer's experience.
The guts of the iPad Air are as impressive as the external casing. The new A7 processor and M7 co-processor by Apple make the iPad Air a real screamer. Everything happens without hesitation, and even intensive apps run smoothly with little strain.

Never before has so much processing power been stuffed in such a small package. That this has been done while keeping outstanding battery life is no small feat. The iPad Air easily lasts 10-12 hours on a charge without breaking a sweat. Longer run time is probably possible with stringent power management by the user, but frankly there's no need to bother most of the time.
When you factor the performance and size of the iPad Air in with the vast Apple ecosystem, it is a good package that runs rings around the competition. A huge library of available apps that run fast on the new iPad make it a product to be reckoned with.
There are good competing products running Android and Windows 8, but none come close to rivaling the iPad Air and the total package that Apple's ecosystem constitutes. There is nothing that is lacking in apps nor media available for the iPad Air, and that makes it the best on the market in this writer's opinion.
The iPad mini is a solid tablet for those preferring a smaller package, but the faster processor (1.4GHz vs 1.3GHz) and larger display make the iPad Air the better of the two.

Five ways to improve battery life on Windows

Having battery life problems on your Windows 8.1 laptop? These tips will help you squeeze the most juice out of your battery.

You shouldn't have to be tethered to your desk to use your laptop. While battery life is improving, it still isn't perfect. If you've got a Windows 8.1 machine, these tips will help you squeeze the most juice out of your computer's battery.

Software updates

Microsoft routinely issues patches and software updates to fix bugs and add new features to Windows. It's always a good idea to be on the latest version of Windows. Not only will these updates help keep your system more secure, but they can sometimes also improve your battery life.
To check for updates, go to the Charms menu by swiping from right to left on the screen or moving your mouse to the lower right corner of the screen. Then, click on Settings, select the "Change PC settings" option, followed by Updates and Recovery, and click the "Check for updates" box.

Tweak power settings

Microsoft has bundled various power saving options inside of Windows 8.1. These settings can be accessed from the desktop by opening the Control Panel, selecting Hardware and Sound, and clicking on Power options. Here you can choose a power plan from Microsoft or you can create your own.

You can tweak things like brightness, when the display will turn off, and when the computer will go to sleep, among other things. Clicking on the "Change advanced power settings" will open the door to even more customization options.

Dim the display

The display on your laptop uses a ton of energy. When you disconnect the power cord, it's best to dim the brightness to below half, or to a level that is suitable for your eyes. This can be done by going to the Charms menu and selecting Settings. The brightness options are located above the keyboard icon and next to the volume menu.

If your laptop includes it, you should also disable the automatic brightness feature, and dim the keyboard backlight. To do this, go to Settings, click on the "Change PC settings" option, tap on PC and Devices, followed by Display, and turn off the "Adjust my screen brightness automatically" slider.
To dim the keyboard backlight, open the Charms menu, click on Search, type in "mobility," and select Windows Mobility Center.

Turn off Bluetooth

Even if you don't have a wireless mouse or speakers connected, having Bluetooth enabled will still draw power from your computer's battery. To disable the Bluetooth radio, go to Settings, click on the "PC and devices" option, and select Bluetooth.

Disconnect any dongles

As is the case with Bluetooth, a USB-connected device (such as a flash drive) will also drain your battery. If you aren't using the dongle or device, you should unplug it to prevent battery drain. If the power cord is unplugged, charging your smartphone or tablet via a USB port will also reduce your battery life.

Intel gives gaming desktops a boost with Haswell-E

A new eight-core CPU, plus the X99 chipset, both aim at high-end PC gamers.

Much of the talk about upcoming PCs revolves around the next generation of processors from Intel. Codenamed Broadwell, those chips include the next generation of Core i-series CPUs, expected in products next year, and a new line for slim, low-power devices, called Core M, expected in late-2014 products.
But before we get to any of that, Intel's current generation of Haswell CPUs, also known as fourth-generation Core i-series chips, has one more trick up its sleeve. The Haswell-E line is a collection of high-end Core i7 CPUs for desktop computers, including the new Alienware Area 51, also announced today.
Haswell-E is Intel's first eight-core desktop processor (a six-core version will also be available). It pairs with Intel's new X99 motherboard chipset, which supports newer DDR4 RAM and up to four graphics cards.
The flagship CPU in the line is the Core i7-5960X, a 3.0GHz eight-core/16-thread chip that can turbo up to 3.5GHz. Also available will be the Core i7-5930K and the Core i7-5820K, both of which are six-core/12-thread chips.
Intel says the top-end chip is up to 20 percent faster than its predecessor, the Core i7-4960X, at 4K video editing, and 32 percent faster in 3D rendering. This is also the first Intel desktop platform that natively supports Thunderbolt 2 connections for fast connectivity and data transfer, especially important with the growth of 4K video.
Most of the major gaming desktop makers are expected to offer Haswell-E systems starting from the end of August, including Maingear, Falcon Northwest, Velocity Micro, and Origin PC. We've already gotten a chance to see one very distinct new system in action, the pyramid-shaped Alienware Area 51. While this new hardware may give the desktop gaming market a boost, we've seen a major shift to gaming laptops in the past year, and high interest in alternative forms of PC gaming, such as Valve's Steam Machine concept.
The new Haswell-E Core i7 CPUs will be available immediately, and cost from $390 to $1,000.