CHAPTER 33
Logging and Debugging
have been put in place to accomplish a specific action but are no longer needed or
will never be repeated. Note that the task's trigger must contain an expiration date.
. If the Task Is Already Running: Do Not Start a New Instance—The task will not start
a new instance if an instance of the task is already running.
. If the Task Is Already Running: Run a New Instance in Parallel—A new instance of the
task will run in parallel if one instance is already running and the triggers and
conditions cause the task to be triggered again.
. If the Task Is Already Running: Queue a New Instance—A new instance will queue, but
it will not start until the first instance is complete, and it will not stop the
instance that is already running.
. If the Task Is Already Running: Stop the Existing Instance—When the triggers and
conditions cause the task to be triggered again, the running instance is stopped
first and then a new instance of the task is started.
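These four choices correspond to the task's multiple-instance policy in the Task Scheduler 2.0 object model. As a minimal sketch (the task name is hypothetical and not from this chapter), the current setting can be inspected from PowerShell on Windows Server 2008 R2 through the Schedule.Service COM interface:

# Minimal sketch: read a task's multiple-instance policy through the Task
# Scheduler 2.0 COM interface. 'NightlyMaintenance' is a hypothetical task
# located in the root task folder.
$service = New-Object -ComObject 'Schedule.Service'
$service.Connect()                               # connect to the local Task Scheduler
$task = $service.GetFolder('\').GetTask('NightlyMaintenance')

# MultipleInstances maps to the options listed above:
#   0 = run a new instance in parallel
#   1 = queue a new instance
#   2 = do not start a new instance (the default for new tasks)
#   3 = stop the existing instance
$task.Definition.Settings.MultipleInstances

Changing the value requires re-registering the updated task definition, or exporting the task, editing the MultipleInstancesPolicy element in its XML, and importing it again.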
Understanding Task History
The History tab on the properties page for a task contains events filtered from the
Operational events for the Task Scheduler in the Event Viewer and enables an administra-
tor to see success and failures for any given task without having to review all task-related
event information for a system or collection of systems.
NOTE
Although the Task Scheduler enables an administrator to create folders for organizing
tasks and new tasks can be given meaningful names, after a folder or task is created,
it cannot be renamed. Further, tasks cannot be moved from one folder to another.
However, tasks can be exported and then imported into a new folder or another system.
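For example (a minimal sketch with hypothetical task and folder names), a task can be exported to XML with schtasks.exe and then re-created under a different folder or name:

# Export the existing task definition to an XML file.
schtasks /Query /TN "Reports\OldTaskName" /XML > C:\Temp\task.xml

# Re-create the task from the exported XML under a new folder and name.
# /RU supplies the account the re-imported task should run as.
schtasks /Create /TN "Archive\NewTaskName" /XML C:\Temp\task.xml /RU SYSTEM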
Summary
Logging and debugging tools help administrators monitor, manage, and troubleshoot
errors on a Windows Server 2008 R2 system and infrastructure. Many of the tools used to
identify system problems in a Windows Server 2008 R2 environment have been improved
over their counterparts in earlier releases of the Windows operating system. In addition,
new tools have been introduced to enhance the administrative logging and debugging
experience. Key to problem solving is enabling logging, monitoring the logs to identify
errors, researching those errors, and performing system recovery based on problem
resolution.
In addition to the tools and utilities that come with the Windows Server 2008 R2 environ-
ment are resources such as the Microsoft TechNet database (www.microsoft.com/technet/).
Between utility and tool improvements as well as online technical research databases,
problem solving can be simplified in a Windows Server 2008 R2 infrastructure.
Best Practices
The following are best practices from this chapter:
. Use the Task Manager to provide an instant view of system resources, such as proces-
sor activity, process activity, memory usage, and resource consumption.
. Use Event Viewer to check whether Windows Server 2008 R2 is experiencing problems.
. To mitigate configuration issues, server roles should be scanned with the Best
Practices Analyzer tool on a regular basis.
. Use filters, grouping, and sorting to help isolate and identify key events.
. Create custom filters to expedite problem identification and improve monitoring
processes (a sample filter query appears after this list).
. Create alerts using triggers and actions to identify issues quickly.
. Archive security logs to a central location on your network and then review them
ptg
periodically against local security logs.
. Use subscriptions to consolidate logs from multiple systems to ensure that problems
are identified quickly.
. Set an auditing policy to shut down the server immediately when the security log is
full. This prevents generated logs from being overwritten or old logs from being erased.
. Establish a process for monitoring and analyzing system performance to promote
maximum uptime and to meet service-level agreements.
. Run System Monitor from a remote computer to monitor servers.
. Use logging when monitoring a larger number of servers.
. Establish performance baselines.
. Create logging jobs based on established baselines to ensure performance data is
captured during times when the system is having resource issues and to facilitate
alerting for proactive system management.
. Create new baselines as applications or new services are added to a server.
. Consider reducing the frequency of data collection to reduce the amount of data
that must be collected and analyzed.
. Use logs to capture performance data.
. Use the Reliability Monitor to identify a timeline of system degradation to facilitate
expeditious investigation of root issue causes.
. Use the Memory Diagnostics Tool to facilitate hardware troubleshooting.
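As an illustration of the custom filtering recommendation above (a minimal sketch, not taken from this chapter), the following PowerShell query pulls only the error and warning events written to the System log during the last 24 hours:

# Errors (level 2) and warnings (level 3) from the System log in the last 24 hours.
# Requires PowerShell 2.0 or later, which is included with Windows Server 2008 R2.
$since = (Get-Date).AddHours(-24)
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 2, 3; StartTime = $since } |
    Sort-Object TimeCreated -Descending |
    Format-Table TimeCreated, ProviderName, Id, Message -AutoSize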
CHAPTER 34
Capacity Analysis and Performance Optimization
IN THIS CHAPTER
. Defining Capacity Analysis
. Using Capacity-Analysis Tools
. Monitoring System Performance
. Optimizing Performance by Server Roles
Capacity analysis and performance optimization is a criti-
cal part of deploying or migrating to Windows Server 2008
R2. Capacity analysis and performance optimization ensures
that resources and applications are available, uptime is
maximized, and systems scale well to meet the growing
demands of business. The release of Windows Server 2008
R2 includes some new and some refreshed tools to assist IT
administrators and staff with properly assessing server
capacity and performance—before and after Windows
Server 2008 R2 is deployed on the network. If you invest
time in these processes, you will spend less time trou-
bleshooting or putting out fires, thus making your life less
stressful and also reducing business costs.
The majority of capacity analysis is working to minimize
unknown or immeasurable variables, such as the number of
gigabytes or terabytes of storage the system will need in the
next few months or years, to adequately size a system. The
high number of unknown variables is largely because
network environments, business policy, and people are
constantly changing. As a result, capacity analysis is as much an art that draws on
experience and insight as it is a science.
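To make the storage example concrete (the figures here are purely hypothetical), a simple compound-growth projection is one way to turn an assumed growth rate into a capacity estimate:

# Illustrative projection only: estimate future storage needs from the current
# usage and an assumed monthly growth rate. All values are hypothetical.
$currentGB     = 500       # storage in use today
$monthlyGrowth = 0.05      # assumed 5 percent growth per month
$months        = 24        # planning horizon

$projectedGB = $currentGB * [math]::Pow(1 + $monthlyGrowth, $months)
"Projected storage needed in {0} months: {1:N0} GB" -f $months, $projectedGB

Even a rough projection like this turns an unknown into a working number that can be revisited as real utilization data accumulates.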
If you’ve ever found yourself having to specify configura-
tion requirements for a new server or having to estimate
whether your configuration will have enough power to
sustain various workloads now and in the foreseeable
future, proper capacity analysis can help in the design and
configuration. These capacity-analysis processes help weed
out the unknowns and assist you while making decisions as
accurately as possible. They do so by giving you a greater understanding of your Windows
Server 2008 R2 environment. This knowledge and understanding can then be used to
reduce time and costs associated with supporting and designing an infrastructure. The
result is that you gain more control over the environment, reduce maintenance and
support costs, minimize firefighting, and make more efficient use of your time.
Business depends on network systems for a variety of different operations, such as
performing transactions or providing security, so that the business functions as efficiently
as possible. Systems that are underutilized are probably wasting money and are of little
value. On the other hand, systems that are overworked or can’t handle workloads prevent
the business from completing tasks or transactions in a timely manner, might cause a loss
of opportunity, or keep the users from being productive. Either way, these systems are
typically of little benefit to the business. To keep network systems well tuned
for the given workloads, capacity analysis seeks a balance between the resources available
and the workload required of the resources. The balance provides just the right amount of
computing power for given and anticipated workloads.
This concept of balancing resources extends beyond the technical details of server configu-
ration to include issues such as gauging the number of administrators that might be
needed to maintain various systems in your environment. Many of these questions relate
to capacity analysis, and the answers aren’t readily known because they can’t be predicted
with complete accuracy.
To lessen the burden and dispel some of the mysteries of estimating resource require-
ments, capacity analysis provides the processes to guide you. These processes include
vendor guidelines, industry benchmarks, analysis of present system resource utilization,
and more. Through these processes, you’ll gain as much understanding as possible of the
network environment and step away from the compartmentalized or limited understand-
ing of the systems. In turn, you’ll also gain more control over the systems and increase
your chances of successfully maintaining the reliability, serviceability, and availability of
your system.
There is no set or formal way to start your capacity-analysis processes. However, a proven
and effective means to begin to proactively manage your system is to first establish
systemwide policies and procedures. Policies and procedures, discussed shortly, help shape
service levels and users’ expectations. After these policies and procedures are classified and
defined, you can more easily start characterizing system workloads, which will help gauge
acceptable baseline performance values.
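As a starting point for characterizing workloads and capturing baseline values (a minimal sketch; the counters, interval, and output path are only examples), the performance cmdlets included with Windows Server 2008 R2 can record a short baseline:

# Capture a one-hour baseline: processor, memory, and disk queue counters sampled
# every 15 seconds, saved to a binary log for later comparison in Performance Monitor.
$counters = '\Processor(_Total)\% Processor Time',
            '\Memory\Available MBytes',
            '\PhysicalDisk(_Total)\Avg. Disk Queue Length'

Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path C:\PerfLogs\baseline.blg -FileFormat BLG

The resulting .blg file can be opened in Performance Monitor and compared against captures taken later, when the system is under heavier load.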
The Benefits of Capacity Analysis and Performance Optimization
The benefits of capacity analysis and performance optimization are difficult to overstate.
Capacity analysis helps define and gauge overall system health by establishing baseline
performance values, and then the analysis provides valuable insight into where the system
is heading. Continuous performance monitoring and optimization will ensure systems are
stable and perform well, reducing support calls from end users, which, in turn, reduces
costs to the organization and helps employees be more productive. It can be used to
uncover both current and potential bottlenecks and can also reveal how changing
management activities can affect performance today and tomorrow.
Another benefit of capacity analysis is that it can be applied to small environments and
scale well into enterprise-level systems. The level of effort needed to initially drive the
capacity-analysis processes will vary depending on the size of your environment, geogra-
phy, and political divisions. With a little up-front effort, you'll save time and expense,
and gain a wealth of knowledge and control over the network environment.
Establishing Policy and Metric Baselines
As mentioned earlier, it is recommended that you first begin defining policies and proce-
dures regarding service levels and objectives. Because each environment varies in design,
you can’t create cookie-cutter policies—you need to tailor them to your particular business
practices and to the environment. In addition, you should strive to set policies that set
user expectations and, more important, help winnow out empirical data.
Essentially, policies and procedures define how the system is supposed to be used—estab-
lishing guidelines to help users understand that the system can’t be used in any way they
see fit. Many benefits are derived from these policies and procedures. For example, in an
environment where policies and procedures are working successfully and where network
performance becomes sluggish, it would be safe to assume that groups of people weren’t
playing a multiuser network game, that several individuals weren’t sending enormous
email attachments to everyone in the Global Address List, or that a rogue web or FTP
server wasn’t placed on the network.
The network environment is shaped by the business more so than the IT department.
Therefore, it’s equally important to gain an understanding of users’ expectations and
requirements through interviews, questionnaires, surveys, and more. Some examples of
policies and procedures that you can implement in your environment pertaining to end
users could be the following:
. Email message size, including attachments, can't exceed 10MB.
. SQL Server database settings will be enforced with Policy-Based Management.
. Beta software, freeware, and shareware can be installed only on test equipment (that
is, not on client machines or servers in the production environment).
. Specify what software is allowed to run on a user’s PC through centrally managed
but flexible group policies.
. All computing resources are for business use only (in other words, no gaming or
personal use of computers is allowed).
. Only business-related and approved applications will be supported and allowed on
the network.