Archive for February, 2015:

How Google avoids downtime

Hint: It doesn’t aim for perfection in keeping Google cloud, apps up

Google offers lots of services and it has pretty good reliability. How does the company do it?

Much of that is up to Ben Treynor, Google’s vice president of engineering, and founder of the company’s site reliability team. And he’s developed an interesting approach at Google for thinking about reliability.

People may assume that the vendor is aiming for Google Apps and its other services to be up and available 100% of the time. Sure that may be the goal, but Treynor is realistic. Each Google product has a service level agreement (SLA) that dictates how much downtime the product can have in a given month or year. Take 99.9% uptime, for example: That allows for 43 minutes of downtime per month, or about 8 hours and 40 minutes per year. That 8 hours and 40 minutes is what Treynor refers to as an “error budget.”
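
To make the arithmetic concrete, here is a small Python sketch (nothing Google publishes, just the math described above) that converts an uptime SLA into the monthly and yearly downtime allowance Treynor calls an error budget.

```python
# Convert an uptime SLA percentage into an "error budget" of allowable downtime.
# Uses 30-day months, matching the rough figures quoted in the article.

def error_budget(sla_percent: float) -> dict:
    """Return the downtime allowance implied by an uptime SLA."""
    downtime_fraction = 1 - sla_percent / 100
    minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month
    return {
        "minutes_per_month": downtime_fraction * minutes_per_month,
        "hours_per_year": downtime_fraction * minutes_per_month * 12 / 60,
    }

if __name__ == "__main__":
    budget = error_budget(99.9)
    print(f"99.9% uptime allows about {budget['minutes_per_month']:.0f} minutes "
          f"of downtime per month, roughly {budget['hours_per_year']:.1f} hours per year")
```

Running it for 99.9% prints about 43 minutes per month and roughly 8.6 hours per year, which is the budget each product team gets to spend.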

Google product managers don’t have to be perfect – they just have to be better than their SLA guarantee. So each product team at Google has a “budget” of errors it can make. Basically, they just can’t make more mistakes than what the SLA allows for.

Treynor explains that in a traditional site reliability model there is a fundamental disconnect between site reliability engineers (SREs) and the product managers. Product managers want to keep adding services to their offerings, but the SREs don’t like changes because that opens the door to more potential problems. This “error budget” model addresses that issue, though, by uniting the priorities of the SREs and product teams.

If the product adheres to the SLA’s uptime promise, then the product team is allowed to launch new features. If the product is outside of its SLA, then no new features are allowed to be rolled out until the reliability improves.

By putting the onus on the product developers to architect reliable systems, it’s a win-win for everyone. SREs get to have reliable systems, developers get to add features and users don’t experience downtime (hopefully). Having a system of error budgets – instead of mandating 100% uptime – gives developers and engineers some leeway, while more closely aligning the priorities of developers and site reliability workers. Watch a video of Treynor explaining the process here.

It seems to work. According to tracking firm CloudHarmony, Google’s IaaS cloud computing platform had some of the best uptime statistics among the major vendors last year. See more details of how Google compared to Amazon, Microsoft and others here. Of course outages still do happen; Google Compute Engine (GCE) suffered one this month, in fact.



How Etsy makes Devops work

Etsy, which describes itself as an online “marketplace where people around the world connect to buy and sell unique goods,” is often trotted out as a poster child for Devops. The company latched onto the concepts early and today is reaping the benefits as it scales to keep pace with rapid business growth. Network World Editor in Chief John Dix caught up with Etsy VP of Technical Operations Michael Rembetsy to ask how the company put the ideas to work and what lessons it learned along the way.

Let’s start with a brief update on where the company stands today.

The company was founded and launched in 2005 and, by the time I joined in 2008 (the same year as Chad Dickerson, who is now CEO), there were about 35 employees. Now we have well over 600 employees and some 42 million members in over 200 countries around the world, including over 1 million active sellers. We don’t have sales numbers for this year yet, but in 2013 we had about $1.3 billion in Gross Merchandise Sales.

How, where and when did the company become interested in Devops?
When I joined things were growing in a very organic way, and that resulted in a lot of silos and barriers within the company and distrust between different teams. The engineering department, for example, put a lot of effort into building a middle layer – what I called the layer of distrust – to allow developers to talk to our databases in a faster, more scalable way. But it turned out to be just the opposite. It created a lot more barriers between database engineers and developers.

Everybody really bonded well together on a personal level. People were staying late, working long hours, socializing after hours, all the things people do in a startup to try to be successful. We had a really awesome office vibe, a very edgy feel, and we had a lot of fun, even though we had some underlying engineering issues that made it hard to get things out the door. Deploys were often very painful. We had a traditional mindset of, developers write the code and ops deploys it. And that doesn’t really scale.

How often were you deploying in those early days?
Twice a week, and each deploy took well over four hours.
“Deploys were often very painful. We had a traditional mindset of, developers write the code and ops deploys it. And that doesn’t really scale.”

Twice a week was pretty frequent even back then, no?
Compared to the rest of the industry, sure. We always knew we wanted to move faster than everyone else. But in 2008 we compared ourselves to a company like Flickr, which was doing 10 deploys a day, which was unheard of. So we were certainly going a little bit faster than many companies, but the problem was we weren’t going fast with confidence. We were going fast with lots of pain and it was making the overall experience for everyone not enjoyable. You don’t want to continuously deploy pain to everyone. We knew there had to be a better way of doing it.

Where did the idea to change come from? Was it a universal realization that something had to give?
The idea that things were not working correctly came from Chad. He had seen quite a lot in his time at Yahoo, and knew we could do it better and we could do it faster. But first we needed to stabilize the foundation. We needed to have a solid network, needed to make sure that the site would be up, to build confidence with our members as well as ourselves, to make sure we were stable enough to grow. That took us a year and a half.

But we eventually started to figure out little things like, we shouldn’t have to do a full site deploy every single time we wanted to change the banner on the homepage. We don’t have any more banners on the homepage, but back in 2009 we did. The banner would rotate once a week and we would have to deploy the entire site in order to change it, and that took four hours. It was painful for everyone involved. We realized if we had a tool that would allow someone in member ops or engineering to go in and change that at the flick of a button we could make the process better for everyone.
“I can’t recall a time where someone walked in and said, “Oh my God, that person deployed this and broke the site.” That never happened. People checked their egos at the door.”

So that gave birth to a dev tools team that started building some tooling that would let people other than operational folks deploy code to change a banner. That was probably one of the first Devops-like realizations. We were like, “Hey, we can build a better tool to do some of what we’re doing in a full deploy.” That really sparked a lot of thinking within the teams.

Then we realized we had to get rid of this app in the middle because it was slowing us down, and so we started working on that. But we also knew we could find a better way to deploy than making a tar file, SSH’ing and rsync’ing it out to a bunch of servers, and then running another command that pulls the server out of the load balancer, unpacks the code and then puts the server back in the load balancer. This used to happen while we sat there hoping everything was OK as we deployed across something like 15 servers. We knew we could do it faster and we knew we could do it better.
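
To make that pre-Deployinator workflow concrete, here is a rough Python sketch of that kind of deploy loop. It is illustrative only: the hostnames, paths and load-balancer commands (lb-disable, lb-enable) are hypothetical placeholders, not Etsy's actual tooling.

```python
# A minimal sketch of the manual deploy loop described above -- NOT Etsy's
# real scripts. Hostnames, paths and the lb-disable/lb-enable commands are
# hypothetical placeholders used purely for illustration.
import subprocess

SERVERS = [f"web{i:02d}.example.com" for i in range(1, 16)]  # ~15 web hosts
TARBALL = "site-release.tar.gz"

def run(cmd: list[str]) -> None:
    """Run a command and abort the deploy if it fails."""
    subprocess.run(cmd, check=True)

def deploy(host: str) -> None:
    # 1. Copy the release tarball to the host.
    run(["rsync", "-az", TARBALL, f"{host}:/tmp/{TARBALL}"])
    # 2. Pull the host out of the load balancer (placeholder command).
    run(["ssh", host, "sudo", "lb-disable", host])
    # 3. Unpack the code into the web root.
    run(["ssh", host, "tar", "-xzf", f"/tmp/{TARBALL}", "-C", "/var/www/site"])
    # 4. Put the host back into rotation.
    run(["ssh", host, "sudo", "lb-enable", host])

if __name__ == "__main__":
    for host in SERVERS:  # one host at a time, hoping nothing breaks
        deploy(host)
```

Every step is serial and every step is a chance for something to go wrong, which is exactly the pain described above and what the later tooling removed.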

The idea of letting developers deploy code onto the site really came about toward the end of 2009, beginning of 2010. And as we started adding more engineers, we started to understand that if developers felt the responsibility for deploying code to the site they would also, by nature, take responsibility for whether the site was up or down, take performance into consideration, and gain an understanding of the stress and fear of a deploy.

It’s a little intimidating when you’re pushing that big red button that says – Put code onto website – because you could impact hundreds of thousands of people’s livelihoods. That’s a big responsibility. But whether the site breaks is not really the issue. The site is going to break now and then. We’re going to fix it. It’s about making sure the developers and others deploying code feel empowered and confident in what they’re doing and understand what they’re doing while they’re doing it.

So there wasn’t a Devops epiphany where you suddenly realized the answer to your problems. It emerged organically?
It was certainly organic. If development came up with better ideas of how to deploy faster, operations would be like, “OK, but let’s also add more visibility over here, more graphs.” And there was no animosity between each other. It was just making things faster and better and stronger in a lot of ways.

And as we did that the culture in the whole organization began to feel better. There was no distrust between people. You’re really talking about building trust and building friendships in a lot of ways, relationships between different groups, where it’s like, “Oh, yeah. I know this group. They can totally do this. That’s fine. I’ll back them up, no problem.” In a lot of organizations I’ve worked for in the past it was like, “These people? Absolutely not. They can’t do that. That’s absurd.”
“I didn’t marry my wife the first day I met her. It took me a long time to get to the point where I felt comfortable in a relationship to go beyond just dating. It takes longer than people think and they need to be aware of that because, if it doesn’t work after a quarter or it doesn’t work after two quarters, people can’t just abandon it.”

And you have to remember this is in the early days where the site breaks often. So it was one of those things, like, OK, if it breaks, we fix it, but we want reliability and sustainability and uptime. So in a lot of ways it was a big leap of faith to try to create trust between each other and faith that other groups are not going to impact the rest of the people.

A lot of that came from the leadership of the organization as well as the teams themselves believing we could do this. Again, we weren’t an IBM. We were a small shop. We all sat very close to one another. We all knew when people were coming and leaving so it made it relatively easy to have that kind of faith in one another. I can’t recall a time where someone walked in and said, “Oh my God, that person deployed this and broke the site.” That never happened. People checked their egos at the door.

I was going to ask you about the physical proximity of folks. So the various teams were already sitting cheek by jowl?
In the early days we had people on the left coast and on the right coast, people in Minnesota and New York. But in 2009 we started to realize we needed to bring things back in-house to stabilize things, to make things a little more cohesive while we were creating those bonds of trust and faith. So if we had a new hire we would hire them in-house. It was more of a short-term strategy. Today we are more of a remote culture than we were in 2009.

But you didn’t actually integrate the development and operations teams?
In the early days it was very separate but there was no idea of separation. Depending upon what we were working on, we would inject ourselves into those teams, which led later to this idea of what we call designated operations. So when John Allspaw, SVP of Operations and Infrastructure, came on in 2010, we were talking about better ways to collaborate and communicate with other teams and John says, “We should do this thing called designated operations.”

The idea of designated ops is it’s not dedicated. For example, if we have a search team, we don’t have a dedicated operations person who only works on search. We have a designated person who will show up for their meetings, will be involved in the development of a new feature that’s launching. They will be injecting themselves into everything the engineering team will do as early as possible in order to bring the mindset of, “Hey, what happens if that call to this third-party provider fails? Oh, yeah. Well, that’s going to throw an exception. Oh, OK. Are we capturing it? Are we displaying a friendly error for an end user to see? Etc.”

And what we started doing with this idea of designated ops was educating a lot of developers on how operations works, how you build Ganglia graphs or Nagios alerts, and by doing that we actually started creating more allies for how we do things. A good example: the search team now handles all the on-call for the search infrastructure, and if they are unavailable it escalates to ops and then we take care of it.

So we started seeing some real benefits by using the idea of this designated ops person to do cross-team collaboration and communication on a more frequent basis, and that in turn gave us the ability to have more open conversations with people. So that way you remove a lot of the mentality of, “Oh, I’m going to need some servers. Let me throw this over the wall to ops.”

Instead, what you have is the designated ops person coming back to the rest of the ops team saying, “We’re working on this really cool project. It’s going to launch in about three months. With the capacity planning we’ve done it is going to require X, Y and Z, so I’m going to order some more servers and we’ll have to get those installed and get everything up and running. I want to make everybody aware I’m also going to probably need some network help, etc.”

So what we started finding was the development teams actually had an advocate through the designated ops person coming back to the rest of the ops team saying, “I’ve got this.” And when you have all of your ops folks integrating themselves into these other teams, you start finding some really cool stuff, like people actually aren’t mad at developers. They understand what they’re trying to do and they’re extremely supportive. It was extremely useful for collaboration and communication.

So Devops for you is more just a method of work.

Correct. There is no Devops group at Etsy.

How many people are involved at this point?

Product engineering is north of 200 people. That includes tech ops, development, product folks, and so on.

How do you measure success? Is it the frequency of deployments or some other metric?
Success is a really broad term. I consider failure success, as well. If we’re testing a new type of server and it bombs, I consider that a success because we learned something. We really changed over to more of a learning culture. There are many, many success metrics and some of those successes are actually failures. So we don’t have five key graphs we watch at all times. We have millions of graphs we watch.

Do you pay attention to how often you deploy?
We do. I could tell you we’re deploying over 60 times a day now, but we don’t say, “Next year we want to deploy 100 times a second.” We want to be able to scale the number of deploys we’re doing with how quickly the rest of the teams are moving. So if a designated ops or development team starts feeling some pain, we’ll look at how we can improve the process. We want to make sure we’re getting the features out we want to get out and if that means we have to deploy faster, then we’re going to solve that problem. So it’s not around the number of deploys.

I presume you had to standardize on your tool sets as you scaled.
We basically chose a LAMP stack: Linux, Apache, MySQL and PHP. A lot of people were like, “Oh, I want to use CoffeeScript or I want to use Tokyo Cabinet or I want to use this or that,” and it’s not about restricting access to languages, it’s about creating a common denominator so everyone can share experiences and collaborate.

And we wrote Deployinator, which is our in-house tool that we use to deploy code, and we open-sourced it because one of our principles is we want to share with the community. Rackspace at one point took Deployinator and rewrote a bunch of stuff and they were using it as their own deploying tool. I don’t know if they still are today, but that was back in the early days when it first launched.

We use Chef for configuration management, which is spread throughout our infrastructure; we use it all over the place. And we have a bunch of homegrown tools that help us with a variety of things. We use a lot of Nagios and Graphite and Ganglia for monitoring. Those are open-source tools that we contribute back to. I’d say that’s the vast majority of the tooling that ops uses at this point. Development obviously uses standard languages and we built a lot of tooling around that.
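
As a small, generic illustration of the kind of instrumentation those monitoring tools encourage, here is a Python snippet that pushes a single data point to Graphite over Carbon's standard plaintext protocol (one "path value timestamp" line per metric, TCP port 2003 by default). The hostname and metric name are placeholder assumptions, not Etsy's.

```python
# Push one metric to Graphite using Carbon's plaintext protocol.
# graphite.example.com and the metric path are illustrative placeholders.
import socket
import time

GRAPHITE_HOST = "graphite.example.com"  # placeholder hostname
GRAPHITE_PORT = 2003                    # Carbon plaintext listener default

def send_metric(path: str, value: float) -> None:
    """Send a single '<path> <value> <unix-timestamp>' line to Carbon."""
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

if __name__ == "__main__":
    # For example, record how long a deploy took, in seconds.
    send_metric("deploys.site.duration_seconds", 212.0)
```

Anything that can emit a line like that, whether a deploy script, a cron job or an application, can feed a Graphite graph, which is part of what makes shared dashboards cheap to build.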

As other people are considering adopting these methods of work, what kind of questions should they ask themselves to see if it’s really for them?
I would suggest they ask themselves why they are doing it. How do they think they’re going to benefit? If they’re doing it to, say, attract talent, that’s a pretty terrible reason. If they’re doing it to improve the overall engineering culture, to help people feel more motivated and take more ownership, or because they think they can improve the community or the product they’re responsible for, that’s a really good reason to do it.

But they have to keep in mind it’s not going to be an overnight process. It’s going to take lots of time. On paper it looks really, really easy. We’ll just drop some Devops in there. No problem. Everybody will talk and it will be great.

Well no. I didn’t marry my wife the first day I met her. It took me a long time to get to the point where I felt comfortable in a relationship to go beyond just dating. It takes longer than people think and they need to be aware of that because, if it doesn’t work after a quarter or it doesn’t work after two quarters, people can’t just abandon it. It takes a lot of time. It takes effort from people at the top and it takes effort from people on the bottom as well. It’s not just the CEO saying, “Next year we’re going to be Devops.” That doesn’t work. It has to be a cultural change in the way people are interacting. That doesn’t mean everybody has to get along every step of the way. People certainly will have discussions and disagreements about how they should do this or that, and that’s OK.



Key questions to consider when evaluating hybrid cloud

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Hybrid cloud is the talk of IT, but to avoid getting trapped in costly, labor-intensive megaprojects, pay particular attention to minimizing implementation and management complexity. These questions will help you identify the best hybrid cloud architecture for your environment:

1. What are the top ways we will use our hybrid cloud in the next 12 to 18 months?

In the midmarket, the No. 1 answer is disaster recovery (DR). A secondary data center for DR is a luxury most companies cannot afford. Now, public cloud services have put DR within reach of virtually all organizations. The key is to identify the enabling technology that minimizes complexity, maximizes automation and does not overtax the IT staff. Easy cloud DR solutions exist today for midsized shops; don’t be led into a heavy professional services project.

For larger enterprises looking to utilize hybrid cloud to optimize and free up expensive data centers, hybrid clouds are attainable, manageable options. For example, organizations using VMware might want to leverage Hyper-V because they’re running so many Microsoft applications, while others want to leverage KVM for better flexibility, network outputs and drivers. For them, alternative hypervisors give them flexibility and significant cost savings. Public clouds, on the other hand, are a place where they can grow certain applications, conduct testing and development, and run non-critical applications. However, modern transformation technologies for cross-platform management are necessary to avoid monstrous and expensive system integration efforts.

2. Which public clouds do we want to leverage?

The public cloud space is continuously morphing, and that means there are many choices. End users may be quick to ask for the clouds they recognize: Amazon or maybe Azure. You, however, need to weigh all the factors, including price, scale, support and service.

There are public cloud self-service models offering attractive price points, but they may not have the support staff to help if users run into problems. On the other end of the spectrum, there are options delivering premium-level cloud and service packages – for a substantial price premium. You need to analyze which cloud providers are best by asking the questions, “What do I get for this?” “What do I want to manage?” and “How hands-on do I want to be?” From there, consider the best possible options for management and migration. The answer will most likely be a mix of on-premise solutions, cloud solutions and services. Of course, mixing and matching can add significant management complexity if you are not careful.

3. Which on-premise platforms do we want to use?

Certain applications may require huge virtual machines (VMs), and the technical staff might find that only certain hypervisors can handle the requirements. Maybe another application needs high I/O, which will lead to a different platform choice. Cost versus performance is always a factor. No matter the particulars, you need to think about flexibility combined with ease of use and the least possible disruption. Companies want to manage their hybrid environments in the same way they manage their current environments: They want a single, comprehensive management platform. You must be able to seamlessly migrate workloads between hypervisors and maintain consistent, compliant management. It is very doable today, and thus we see the meteoric rise in hybrid deployments.

4. How will we manage the hybrid environment?
You need to consider compute, network and storage together and ensure your hybrid management construct is capable of easily spanning a range of on- and off-premise platforms and resources – at a very granular level. To be operationally efficient, you need a single point of administration and management across the hybrid resource pool – a self-service portal alone will not be sufficient for day-to-day administration operations.

The ideal would be to have a hybrid management solution that is lightweight, thorough and cost-effective. When managing a hybrid configuration, you need to base these choices on individual requirements: Are they highly regulated? Who will be accessing the information? What is the information? And so on. Regardless of the answers, the best situation is to manage the hybrid cloud and on-premise workloads from a single point. Optimally, the hybrid resources would just work seamlessly with the existing management portfolio.

5. How will this integrate with our existing operations?

Most people inherently resist change. One of the biggest challenges comes from the people inside a company, especially (and rightly so) those who are responsible for delivering a certain grade of service – like IT staff. You have to be aware of this issue, and if adopting hybrid cloud means replacing familiar management consoles, retraining personnel and changing current workflows, employees will balk. Integration with existing operations is essential to successful deployments.

In addition, the hybrid project must scale. What works for a handful of technical people will not work for large-scale production IT, which needs a very deliberate, orchestrated solution that is seamless with existing operations. Success of the project comes down to the ability to manage it. Integration with current tools and processes is key.

6. What skills will be needed to deploy, maintain and operate our hybrid environment?
IT staffers need to be able to analyze what they have today and what is needed tomorrow in terms of cost, performance, compliance and security, and then evaluate the choices. To do this, you need a strong working knowledge of both on- and off-premise management and integration. The hybrid cloud demands a shift in thinking. With on-premise infrastructure, IT teams had to do a lot of the physical underpinnings, such as hardware installation, wiring and networking, so those skill sets were heavily valued. The cloud takes away some of that and introduces a new application of those skill sets. Now, IT teams need to adapt their players and potentially hire new staffers with expertise in hybrid cloud functionalities, management, integration and administration.

7. How will we prevent vendor lock-in?

Before you ask this question, you should look around to see whether you’re already locked in and don’t realize it. Preventing lock-in requires vigilance against technical and financial constraints that could impede the very flexibility hybrid cloud is meant to create. Think about how lock-in occurs and make the hybrid choices that prevent it. For example, if you can seamlessly and easily move hybrid workloads between disparate platforms, that reduces lock-in. If you deploy a management solution that can span platforms, that also reduces lock-in. The other type of lock-in is the long-term contract. Vendors have endless incentives to contractually lock in customers. With the speed of IT change and options, CIOs should be particularly wary of the multi-year enterprise license agreement (ELA).

Enterprises are well along the road to hybrid IT, but many are wary of getting stuck with hybrid cloud megaprojects. The best way to avoid that fate is to focus on flexibility, while leveraging current competencies and investments every step of the way. By minimizing complexity in the implementation stage and creating a flexible management environment that’s intuitive for the operations staff, you can steer clear of unsuccessful, costly and labor-intensive hybrid cloud deployments. With today’s hybrid technologies and solutions, no megaprojects are needed.



98-366 Networking Fundamentals


QUESTION 1
You are employed as a network designer at ABC.com.
An ABC.com client has requested a network setup for his home office. The network has to be cost-effective, and easy to extend and implement. Furthermore, the client wants his workstations connected by a single cable.
Which of the following network topologies should you use?

A. A star network topology.
B. A bus network topology.
C. A mesh network topology.
D. A ring network topology.

Answer: B


QUESTION 2
You are employed as a network designer at ABC.com.
You have recently designed a home office network for ABC.com that includes a switch.
Which of the following are TRUE with regards to network switches? (Choose all that apply.)

A. It keeps track of the MAC addresses attached to each of its ports and directs traffic intended for
a particular address only to the port to which it is attached.
B. It keeps track of the IP addresses attached to each of its ports and directs traffic intended for a
particular address only to the port to which it is attached.
C. It operates at the Physical layer of the OSI model.
D. It operates at the Data-Link layer of the OSI model.

Answer: A,D


QUESTION 3
You are employed as a network administrator at ABC.com. The ABC.com network consists of a
single domain named ABC.com.
As part of a training exercise, you have been asked to identify the layer that allows applications
and a number of user functions access to the network.
Which of the following options represents your response?

A. The document layer.
B. The application layer.
C. The system layer.
D. The Data-link layer.

Answer: B



QUESTION 4
You are employed as a network administrator at ABC.com. The ABC.com network consists of a
single domain named ABC.com.
You have been tasked with making sure that ABC.com’s network includes a server that converts
NetBIOS names to IP addresses.
Which of the following actions should you take?

A. You should consider adding a DHCP server to the ABC.com network.
B. You should consider adding a DNS server to the ABC.com network.
C. You should consider adding a Web server to the ABC.com network.
D. You should consider adding a WINS server to the ABC.com network.

Answer: D


QUESTION 5
You are employed as a network designer at ABC.com.
ABC.com’s network is made up of two network segments, named Subnet A and Subnet B. DHCP clients are located on Subnet A. A DHCP server, named ABC-SR07, is located on Subnet B.
You need to make sure that DHCP clients are able to connect to ABC-SR07.
Which of the following actions should you take?

A. You should make sure that the RRAS service is configured.
B. You should make sure that the Web service is configured.
C. You should make sure that the DNS service is configured.
D. You should make sure that the DHCP relay agent service is configured.

Answer: D





70-533 Implementing Microsoft Azure Infrastructure Solutions


QUESTION 1
You work as a network administrator at ABC.com. The corporate network consists of physical and
virtual servers located in a datacenter and virtual servers hosted on Microsoft Azure.
The company has servers that run Windows Server 2008, Windows Server 2008 R2 and Windows
Server 2012.
A server named TK-App1 runs Windows Server 2008 R2 SP1 and Microsoft .NET 3.5 Framework.
TK-App1 hosts a custom application named ProductionApp.
All users in the Production department use ProductionApp.
You want to run ProductionApp as a cloud service on Microsoft Azure. The server operating
system and .NET framework version that ProductionApp runs under cannot be changed.
Which guest OS family version should you select for the Azure Cloud Services instance?

A. Family 1
B. Family 2
C. Family 3
D. Family 4

Answer: B



QUESTION 2
Your role of Systems Administrator at ABC.com includes the management of the company’s
private and public clouds. The private clouds are hosted in a data center at the company’s
headquarters.
A physical server named TK-SQL1 runs Windows Server 2012 and SQL Server 2012. TK-SQL1
is hosted in the datacenter.
You have an application that runs in Azure Cloud Services. The cloud service consists of two A1
virtual machine instances.
The application copies data to a SQL Server database hosted on TK-SQL1. Users complain that
the application runs slowly when it is copying data to TK-SQL1. You want to reduce the time it
takes the application to copy data to TK-SQL1.
Which of the following actions should you perform?

A. Allocate additional processors to the virtual machines.
B. Deploy the application as two A3 instances.
C. Deploy the application as two A0 instances.
D. Deploy a third A1 instance of the application.

Answer: B



QUESTION 3
You work as a network administrator at ABC.com. The corporate network consists of physical and
virtual servers located in a datacenter and a public cloud hosted on Microsoft Azure.
The company has a Development department. Users in the Development department develop
custom applications that are used within the company.
One custom application is named CorpApp1. The application is hosted in Azure Cloud Services.
The developers release an updated version of CorpApp1.
You need to deploy the updated version of CorpApp1 to Azure cloud services for a period of time
to allow for testing. During testing, the current version of CorpApp1 must remain online. After
testing, the new version must replace the current version as the live version with the minimum
amount of downtime. When the new version is live, the old version must remain available for a
period of time to be redeployed in the event of problems with the new version.
The solution must minimize costs, administrative effort and application downtime.
Which of the following actions should you perform? (Choose all that apply)

A. Deploy the new application to a new cloud service.
B. Deploy the new application to the production area.
C. Deploy the new application to the staging area.
D. Move the old version of the application to a new cloud service.
E. Move the new version of the application to the production area.
F. Move the old version of the application to the staging area.
G. Perform a Virtual IP swap.

Answer: C,G



QUESTION 4
You work as a network administrator at ABC.com. The corporate network consists of physical and
virtual servers located in a datacenter and applications running in Microsoft Azure Cloud Services.
One new cloud services application has an HTTPS endpoint to provide encrypted access for
users.
You need to provide an x.509 certificate to be used by the application for SSL access.
How can you ensure that the certificate can be accessed by the application?

A. Redeploy the application package to include the certificate.
B. Upload the certificate to the staging area.
C. Use the management portal to upload the certificate.
D. Use the management portal to upload the public key of the certificate.

Answer: C



QUESTION 5
You work for a company named ABC.com. Your role as Cloud Administrator includes the
management of the company’s public and private cloud infrastructure.
You have applications and virtual machines hosted on Windows Azure.
An application hosted in Azure Cloud Services provides a web-based portal that is used by all
company employees and selected customers.
Two instances of a virtual machine (VM) running in Windows Azure perform back-end functionality
for the portal application.
The portal application sometimes fails due to cloud services outages.
You want to ensure that the virtual machines (VMs) are deployed to separate fault domains to
ensure that the portal application remains available during network failures, local disk hardware
failures, or any planned downtime.
Which of the following actions will ensure that the VMs are in separate fault domains?

A. Adding the VMs to an Availability Set.
B. Adding the VMs to separate Availability Sets.
C. Adding the VMs to an Affinity Group.
D. Adding the VMs to separate Affinity Groups.

Answer: A



 


74-344 Managing Programs and Projects with Project Server 2013


QUESTION 1
You are employed as an analyst at ABC.com. ABC.com makes use of Project Server 2013 in their
environment.
You are currently performing a Portfolio Analysis. You want to identify projects that should be
included in or excluded from the portfolio automatically.
Which of the following actions should you take?

A. You should consider making use of the Filtering options.
B. You should consider making use of the Sorting options.
C. You should consider making use of the Grouping options.
D. You should consider making use of the Force In and Force Out options.

Answer: D



QUESTION 2
You are employed as a project manager at ABC.com. ABC.com makes use of Project Server 2013
in their environment.
Edit permissions have been granted to all project managers. After successfully editing and
publishing a project in Project Web App (PWA), you are informed that other project managers are
unable to edit your project.
You then access the Project Center in PWA to fix the problem.
Which of the following actions should you take?

A. You should consider making use of the Resource Plan button.
B. You should consider making use of the Build Team button.
C. You should consider making use of the Check in My Projects button.
D. You should consider making use of the Project Permissions button.

Answer: C



QUESTION 3
You are employed as a portfolio manager at ABC.com. ABC.com makes use of Project Online in
their environment.
The following have been set for a portfolio selection:
•Business drivers
•Priorities
•The main constraints to identify the efficient frontier.
ABC.com has accumulated business cases for new proposals, of which a large number can apply
to the same business requirement.
You have been instructed to make sure that the analysis generates the most suitable proposal
with regards to cost and resources. You also have to make sure that the portfolio selection does
not include any recurring efforts.
Which of the following actions should you take?

A. You should consider creating a mutual exclusion dependency among all these projects.
B. You should consider creating a mutual inclusion dependency among all these projects.
C. You should consider creating a specific exclusion dependency among all these projects.
D. You should consider creating a specific inclusion dependency among all these projects.

Answer: A



QUESTION 4
You are employed as a program manager at ABC.com. ABC.com makes use of Project Server
2013 in their environment.
ABC.com has a data warehouse that collects relational information from various business areas.
The execution of this data warehouse is currently your responsibility.
You want to make sure that project managers have the ability to administer the execution for a
business area as individual projects, while the dependencies are still accepted at a program level.
You have instructed the project managers to create, save, and publish sub-projects for every area.
Which of the following actions should you take NEXT?

A. You should consider defining dependencies.
B. You should consider creating a master project file.
C. You should consider inserting the sub-projects into a program-level project.
D. You should consider creating a shared project file.

Answer: C



QUESTION 5
You are employed as a program manager at ABC.com. ABC.com makes use of Project Server
2013 and Project Professional 2013 in their environment.
ABC.com is in the process of implementing a data warehouse. You have been given the
responsibility of supervising this process.
Part of your duties is to configure a program master project that includes subprojects for every
implementation area. Alterations to the dependencies must occur between projects.
You need to achieve your goal in the shortest time possible.
Which of the following actions should you take?

A. You should consider making use of Project Server 2013 to access the program-level project
from Project Web App (PWA).
B. You should consider making use of Project Professional 2013 to access the program-level
project from Project Web App (PWA).
C. You should consider making use of Project Server 2013 to access each of the required
subprojects from Project Web App (PWA).
D. You should consider making use of Project Professional 2013 to access each of the required
subprojects from Project Web App (PWA).

Answer: B


 


VMware CEO touts ‘One cloud, any app, any device’ plan

VMware CEO Pat Gelsinger talks up new hybrid cloud strategy, SDN, OpenStack, partnering with Google and competing with Amazon and Microsoft

VMware is shifting its cloud computing engine into high gear this week with a series of product updates, including new versions of its vSphere virtualization software and VSAN storage platform, plus a distribution of OpenStack and integrations of its NSX software-defined networking tool with its vCloud Air public cloud. This follows a partnership announcement last week with Google in the cloud. VMware CEO Pat Gelsinger – former COO of EMC and CTO of Intel – sat down with IDG Enterprise Chief Content Officer John Gallant and Network World Senior Editor Brandon Butler to discuss all the activity, what it means for customers and how VMware will compete with Amazon and Microsoft in the cloud.

John Gallant (JG): Pat, VMware has had a lot of news between last week and today. What is the single most important thing you want customers to understand about your announcements?
Pat Gelsinger (PG): If there’s one phrase that we’re asking people to get from this it’s: One cloud, any app, any device. This is a view that there is a foundation for one cloud, and vSphere and what we’re announcing in networking and storage gives us this unique position for a unified cloud architecture that can be on and off premise. As we bring that to market, it’s in response to what we hear customers saying. It’s an increasingly liquid world, it’s tumultuous. We see restructuring of traditional players and established players are being moved aside. And we definitely see this unique opportunity for VMware. People are increasingly relying on software at the application layer and they increasingly need a software-defined infrastructure to enable the level of speed, agility and flexibility to respond to that. That’s where we see this set of announcements, the products that we’re bringing being really a very foundational launch for us as we start 2015.

Brandon Butler (BB): At your VMworld keynote address you spoke a lot about VMware’s software defined data center vision. Where do you see these announcements fitting in with that strategy? And also, where is VMware along that journey to achieving the software-defined data center? It seems like VMware has a strong presence in the private cloud and virtualization markets. But things like public cloud and NSX are still relatively nascent.

PG: When we think about the software-defined data center (SDDC), we think of the management of compute, network and storage as common ingredients that we apply both on premise and off premise, and that’s truly what the hybrid cloud is – it’s the ability to tie those two together. In vSphere we have 650 new features, including key breakthroughs in the scale of mission-critical workloads that are supported, like big HANA databases and Hadoop. We’ll have new performance benchmarks that come out, high availability improvements and resiliency features that allow us to attack mission-critical workloads. So the simple message is: Any workload can and should be virtualized. And VSAN is a foundational component of the SDDC, too. VSAN has major improvements in data formats, performance, size, data features and snapshots. VMware Integrated OpenStack is a set of technologies that can be consumed through our traditional APIs as well as through increasingly open APIs. And finally, and to me maybe most importantly, is the hybrid networking. When you talk about moving a VM from my on-premises data center to a public cloud resource, moving the compute piece is the more tractable part, since the standards there are fairly well established, but moving the network, that’s hard: all of the [Layer 2 and 3] network features are required. That’s why the hybrid networking aspect of this announcement is really, I’ll say, the magic that allows this true on- and off-premise ability of the hybrid cloud to function. So taken together, this is the SDDC with a complete set of ingredients, major advancements on all fronts, and now the ability to consume them in new and powerful ways.

BB: You recently announced a partnership that will make aspects of Google’s cloud platform available to customers in VMware’s vCloud Air public cloud portal (Read more about that Google-VMware deal here). How do you envision customers using this new functionality and why was this an important partnership for both VMware and Google?

PG: At the highest level, we think of this as a win-win. It combines the presence VMware has with the enterprise customer, and the unique offering that we have to deliver hybrid services, with the scale Google has with regard to analytics services, storage services and databases, which it hasn’t in any meaningful way been able to bring to the enterprise customer. So what they have complements what we have, and bringing those together through the vCloud Air service we think brings our customers services that can allow them to significantly extend the workloads and opportunities they have for using cloud services.

JG: Who do you see as your primary competitor in the hybrid cloud? It would seem to us that it’s Microsoft, because they’re trying to create a similar enterprise hybrid cloud play.
PG: Whenever I talk about cloud, I always say the four companies that matter are VMware, Amazon, Google and Microsoft. The two that have a legitimate position to deliver a hybrid value proposition are Microsoft and VMware. We have such a foundational leadership position on premises, our 80+ percent share of the on-premise virtualized environment gives us the foundational position of great leadership versus Microsoft or anybody else. And the networking component is really unrivaled. That fundamental leadership, huge on premise, 50 million virtual machines plus networking, we think gives us a highly differentiated position versus Microsoft or anyone else.

BB: How will this partnership with Google impact the competitive dynamics in the IaaS cloud market with Microsoft and Amazon?
PG: It’s going to enhance the unique differentiation we provide. This combines the best of public cloud – these incremental services that Google brings – with the best of private cloud with unique hybrid capabilities. If you now compare that foursome, now you have VMware and Google partnering to further enhance those unique differentiations that we bring. Compared to Microsoft or Amazon, or really anybody else, this really emphasizes the unique aspects we’re able to bring as a hybrid service offering to the enterprise customer.

PG: The pricing and our business strategy are tied together here. Because the first is we’re going to leverage the SDDC technologies. We’re going to use those quite effectively to have a very cost-effective infrastructure, and that’s what the SDDC is all about. And our announcements that we have today with vSphere and the improvements in performance and capacity, virtual SAN and its capabilities and networking really allow us to use industry-standard infrastructure to very effectively deliver enterprise-class services. Further, when we think about the cost dynamics for service providers, they have the lowest cost of capital and huge international networks built out, and increasingly the network is the cost driver of clouds. If you look at the bundled delivery cost to an enterprise customer, those networking costs are critical. And again, we’re leveraging the largest investments in that area on the planet. Further, we do think that as we go forward here, this is a big boy’s game, and as such, smaller players will dissipate on the edges. That’s how we see things playing out and we are ready, willing and are making the investments necessary to take our business, plus our partners’ business, forward effectively. If you listed all the partner announcements that we’ve done, that’s a very formidable force.

BB: When will the features you’re talking about be available to customers and where are you on rolling out the platform to enable the hybrid cloud? Is there still a lot of innovation going on or is this the platform that we’re going to see moving forward for the foreseeable future?

PG: We have all the components in the market, period. We have networking, we have storage, we have compute, we have management; they’re all there. What we’re doing now is tying those together with the on-demand capacities of our vCloud Air. So I’ll say all the foundational bricks are in place and now we’re building on those components. I use ESX as a reference: Essentially the hypervisor was introduced in 2002 and in 2010 we crossed 50% install; in 2012 we hit 70%. It’s those kinds of numbers that we’re going to see, but it took about a decade for those things to play out. That said, for virtual networking, this is maybe year 2005 of ESX? Storage, we’re in year 2004 of ESX.

There’s an enormous amount of innovation that still sits in front of us as we go execute on the hybrid cloud. And I think some of these other technology areas around the SDDC and the hybrid cloud will have somewhat shorter adoptions because we’re able to build on that hypervisor footprint that we have with vSphere. So, I think that we will be able to go faster, but we’re still talking about years of innovation in front of us as we build out these new capabilities. For a technology-oriented company like VMware, we are just thrilled. The stuff that we do at the infrastructure level and the innovations that we do in terms of networking and automation and telemetry, and the ability to operate with new policy-driven mechanisms against these workloads, this is stuff that gets us excited. I mean, I’ve got thousands of engineers that get out of bed every morning, if they even went to bed, specifically for these kinds of assignments.

BB: I see that OpenStack is an important part of this announcement. Why has that been an important technology for VMware to embrace and adopt?

PG: Our embrace of OpenStack in the VMware Integrated OpenStack (VIO) offering really is recognizing the bubbling cauldron of activity in the industry around OpenStack. And what we looked at is that most of that value is at the higher levels of the stack. People are asking: How can I consume, interact and program API-driven provisioning of infrastructure? As we looked at those technologies, it became a straightforward answer for us to add those OpenStack components to our best-of-breed technologies, like ESX, NSX and VSAN.

BB: What would you say to a customer who might wonder if VMware is the best company to work with OpenStack on? Would it be better to work with a company like Red Hat, given its Linux background, or one that has a deeper background in open source?

PG: I will point out to that customer that probably almost all of the Red Hat footprint is already running on VMware. Somewhere between 30 and 40 percent of all the VMs that we run are RHEL (Red Hat Enterprise Linux) or some other Linux variant already. So even though the OS layer might be using RHEL, the virtual machine layer is almost always based on VMware, and KVM from Red Hat has a trivial market share by comparison. It’s just not robust at that infrastructure level.

Further, even if you’re just diehard and everything has to be open source, you say — Boy, there just aren’t any components available at the management layer that fully support some of those networking functions. There is nothing like NSX available there, and at the component level our pieces are being embraced into those environments as the only significant production-worthy option available. So every one of these rock-hard, scalable, world-class components is available in those open-source/OpenStack environments, and it’s really bringing the best of those two worlds together. We don’t view this as an “either/or” world, we view it as an “and” world, because it really is combining the best of those technologies to accomplish the most resilient, scalable, mature infrastructure for enterprises to operate, but also to innovate.

JG: Pat, I wanted to follow up on an original question from Brandon. I think the software-defined data center strategy has had some really important announcements that have moved that strategy along. But how are you measuring customer adoption of this? What are the benchmarks you have and can you tell us a little bit about what you’re seeing from customers on the uptake of the overall vision?

PG: Some of the public data that we talk about is on the earnings call, but I’ll expand from that just a little bit. Some of the things that we look at would be management adoption inside of the large footprint of vSphere customers, and what we’ve said on our earnings call so far is that we have 14% adoption. We’re also carefully monitoring how many of our customers are taking three or four of the legs of the software-defined data center. So 14% now take management and we’re now saying — how many of our customers take vSphere, management and networking? vSphere, management, networking and storage? And that’s one of the metrics that we’re monitoring very closely.

So we track how many of those customers have gone into full production using all legs of the software-defined data center. And obviously something like that starts as a trickle, turns into a stream, and finally it’s a full-blown river of adoption. We’re seeing all of the right trends with regard to NSX adoption, the storage adoption and the adoption of those in conjunction with each other. And that to us is when we have the full-meal deal.

Note: In the earnings call last week, VMware reported it has 400 paying NSX customers, up 60% quarter-over-quarter. NSX bookings doubled in the second half of 2014 compared to the first half and the product has over a $200 million annual booking run rate. VMware reported it had 1,000 paying customers using the VSAN storage platform.

BB: You’ve talked about NSX as a real differentiator for VMware. Do you get the sense that customers are ready to adopt that technology? And also, what would you say is the focus for NSX now? At VMworld it seemed like you were talking about NSX much more from a security standpoint than from the software-defined networking standpoint it had been positioned around before. How do you define NSX with customers now, and do you think they're ready to adopt this cutting-edge technology?

PG: If there’s any doubt on that question, look at our earnings call and the adoption numbers we’re seeing, the momentum we’re seeing with customer pick-up, the revenue acceleration we’re seeing. So unquestionably, we’re crossing that point on the curve in adoption. The two primary use cases are application agility and micro-segmentation or security. Nominally they’re 50-50’ish for customers to date. And one is the fast road and one is the complete road. The fast road is micro-segmentation: You walk into the customer and you say – do you have any assets that are less protected than you’d like them to be? And if the CIO doesn’t say yes to that question, you know he’s not going to be there a long time anyway, right?

Everybody has their most critical assets that are the best protected, and with NSX you start to lay out how you can quickly bring micro-segmentation as an additional layer of protection into those environments. You don't change the network architecture, and you don't even necessarily need to invite the network admin to the meeting. It's a software overlay technology, so you have the CISO and the vAdmin on board very quickly. And after they've begun isolating some of their highest-value assets and getting some operational experience with it, then you invite the network admin, because now you're ready to have a conversation about how to fully deploy the value of network virtualization.

The other question is really one about transforming the network operations so that applications can be deployed with all of their incumbent firewall provisioning, routing, and rules in a fully automated way. That takes application deployment times from weeks to hours or minutes. Those are the transformational use cases that we’ve seen at places like eBay. And those are the two drivers. Both of those are going extremely well with customers. The reason we’ve ended up talking a lot more about micro-segmentation and security is it’s just so easy for customers to adopt it and deploy it in a very targeted and highly beneficial way.
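To illustrate what micro-segmentation looks like when it is driven as a software overlay rather than a network redesign, here is a hedged sketch of pushing a tag-based group and a distributed-firewall policy through a REST API. The endpoint paths and payload fields are assumptions modeled loosely on NSX-style policy APIs, not a verified reference, and the manager address, credentials and group names are placeholders.

    # Illustrative sketch only: endpoint paths and payload shapes are assumed,
    # not taken from official NSX documentation. Verify against your API docs.
    import requests

    NSX_MGR = "https://nsx-manager.example.com"   # placeholder manager address
    AUTH = ("admin", "example-password")          # placeholder credentials

    group_url = f"{NSX_MGR}/policy/api/v1/infra/domains/default/groups/pci-db-servers"
    policy_url = f"{NSX_MGR}/policy/api/v1/infra/domains/default/security-policies/pci-isolation"

    # 1) Define a group of workloads by VM tag rather than by network location.
    requests.patch(group_url, auth=AUTH, verify=False, json={
        "display_name": "pci-db-servers",
        "expression": [{
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Tag",
            "operator": "EQUALS",
            "value": "pci",
        }],
    })

    # 2) Attach a policy that only lets the app tier reach that group on MySQL,
    #    and drops everything else headed for the database group.
    requests.patch(policy_url, auth=AUTH, verify=False, json={
        "display_name": "pci-isolation",
        "rules": [
            {
                "display_name": "app-to-db-mysql",
                "source_groups": ["/infra/domains/default/groups/app-tier"],
                "destination_groups": ["/infra/domains/default/groups/pci-db-servers"],
                "services": ["/infra/services/MySQL"],
                "action": "ALLOW",
            },
            {
                "display_name": "default-deny-to-db",
                "source_groups": ["ANY"],
                "destination_groups": ["/infra/domains/default/groups/pci-db-servers"],
                "action": "DROP",
            },
        ],
    })

The point of the sketch is the model: workloads are grouped by tag rather than by VLAN or subnet, so the allow/deny rules travel with the VMs and nothing in the physical network architecture has to change.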

JG: I wanted to follow up on NSX: In order to make this hybrid VMware vCloud Air service work, does that mean you’re working in conjunction with carrier partners and that they’re deploying NSX as well?

PG: There are multiple pieces to that. Does the customer have to deploy NSX on premise? Does the carrier have to deploy NSX? And is the cloud service deploying NSX? What we announced is that vCloud Air has now implemented NSX and is making those services available to customers. That was the key piece of the announcement.

From a service provider, from the network provider perspective, they don’t need to do anything, because it really is about getting my pipe connected up to vCloud Air across whatever network service I have. However, we’re increasingly finding those service providers enhancing their service offerings via NSX. They’re offering those as differentiated VPN services or MPLS connectivity for their enterprise customers. So they don’t have to, but increasingly they’re seeing that they can differentiate their service offerings to enterprise customers by adopting and deploying NSX as part of their network offering.

On the customer side, they don't need to do anything other than access those services through standard protocols like OSPF and BGP and others. Now, if they have deployed NSX internally, there are more elegant things they can do with it, but it begins with a simple onramp: the standard protocols they're already deploying and using today.

JG: So a customer doesn’t have to commit to NSX, they can just take advantage of its benefits?

PG: Correct. They just access those services through standard network protocols and services that they've most likely already deployed and are highly mature on. Over time we'll do more if they have put NSX in place, but that's round two of the discussion. Round one is: can I now start to view the vCloud Air service as a segment, a compatible extension of my data center, that's entirely network-compatible without modifying any of my security or firewall rules, or anything else? And that's now an absolutely unique capability that we're offering in the marketplace.

BB: I want to ask about EMC’s federation strategy. There’s been a lot of talk in the market about whether EMC might break up its federation of EMC storage, VMware, RSA, Pivotal and now VCE. Activist investor Elliott Management has been pushing for that. Where do you stand on that? Would you like VMware to spin out from EMC? And as a follow-up, there’s been some discussion about EMC Chairman Joe Tucci’s potential retirement. Would you ever want to replace Joe Tucci as chairman of the EMC federation?

PG: We’re very pleased that the truce was announced between Elliott and EMC; the agreement is in place and we’re happy with that. And the reason we’re happy is, as I’ve gone on record and said a number of times, we think the federation model is the best model in this period of high tumult and change in the industry.

We think being bigger and more strategic as a federation is an asset for the companies, for EMC as well as VMware as well as Pivotal, and we believe at this phase of our journey that it’s absolutely the best way to go and we expect that to continue for years to come. We do, in many, many circumstances, find that customers simply say — I want you guys to work together, partner together, deliver me more value together into my environment, and I want to view the federation as one company. And we are getting that strong response from customers and some of our biggest Q4 wins were a direct result of the federation partnership.

So we’re very comfortable and happy that this came together as it did, and pleased that it’s been taken off the table at least for 2015. With regard to me personally, those decisions are made by the board, of course, and I’m thrilled and excited by what I’m doing at VMware and hope to do it for many years to come.

JG: Are there any other aspects of the announcement, or anything else that you would like to touch on or want readers to know about?

PG: We talked about vSphere 6, which is a huge announcement. We didn’t spend a lot of time talking about Virtual SAN and Virtual Volumes, the storage technologies, but we view those as very substantive technology improvements. I think you guys got it with respect to VIO, and we covered that pretty effectively. And then I’ll say there is this profound differentiation of the hybrid network; taken together, SDDC is the foundation for one cloud, any app, any device. The components are in place, customer uptake is strong, and we’ve got years of innovation in front of us that’s turning my engineers’ cranks — and mine — every day.

