Name

renice

Syntax
renice [+|-]nicenumber [option] targets
Description

Alter the nicenumber to set the scheduling priority of one or more running target processes. By default, renice assumes that the targets are numeric PIDs. One or more options may also be used to interpret targets as processes owned by specific users.

Frequently used options

-u

Interpret targets as usernames, affecting all processes owned by those users.

-p

Interpret targets as PIDs (the default).

Examples

This command will lower the priority of the process with PID 501 by increasing its nice number to the maximum:

$ renice 20 501

The following command can be used to increase the priority of all of user adamh’s processes as well as the process with PID 501:

# renice -10 -u adamh -p 501

In this command, -10 indicates a nice value of negative 10, thus giving PID 501 a higher priority on the system. A dash isn’t used for the nice value, because the dash could be confused for an option, such as -u.

On the Exam

Be sure to know the range and meaning of nice numbers and how to change them for new and existing processes. Also note that nice and renice specify their numbers differently. With nice, a leading dash can indicate a nice number (e.g., -10), including a negative one with a second dash (e.g., --10). On the other hand, renice does not need the dash.
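
As a quick, hedged illustration of that difference (the command updatedb and the PID 1450 are placeholders, not examples from the book):

$ nice -10 updatedb
# nice --10 updatedb
# renice 10 -p 1450

The first command starts updatedb with a nice number of 10; the second, run as the superuser, starts it with a nice number of -10; and the third sets the nice number of running process 1450 to 10, with no dash before the value.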

You can renice processes interactively using top’s text interface by using the single-keystroke r command. You will be prompted for the PID of the process whose nice number you wish to change and for the new nice number. If you are the superuser, you can enter negative values. The new nice number will be displayed by top in the column labeled NI for the process you specify.

Objective 7: Search Text Files Using Regular Expressions

Linux offers many tools for system administrators to use for processing text. Many, such as sed, awk, and perl, are capable of automatically editing multiple files, providing you with a wide range of text-processing capability. To harness that capability, you need to be able to define and delineate specific text segments from within files, text streams, and string variables. Once the text you’re after is identified, you can use one of these tools or languages to do useful things to it.

These tools and others understand a loosely defined pattern language. The language and the patterns themselves are collectively called regular expressions (often abbreviated just regexp or regex). Regular expressions are similar in concept to file globs, but many more special characters exist for regular expressions, extending the utility and capability of tools that understand them.

Two tools that are important for the LPIC Level 1 exams and that make use of regular expressions are grep and sed. These tools are useful for text searches. There are many other tools that make use of regular expressions, including the awk, Perl, and Python languages and other utilities, but you don’t need to be concerned with them for the purpose of the LPIC Level 1 exams.

Regular expressions are the topic of entire books, such as Mastering Regular Expressions (O’Reilly). Exam 101 requires the use of simple regular expressions and related tools, specifically to perform searches from text sources. This section covers only the basics of regular expressions, but it goes without saying that their power warrants a full understanding. Digging deeper into the regular expression world is highly recommended in your quest to become an accomplished Linux system administrator.

Regular Expression Syntax

It would not be unreasonable to assume that some specification defines how regular expressions are constructed. Unfortunately, there isn’t one. Regular expressions have been incorporated as a feature in a number of tools over the years, with varying degrees of consistency and completeness. The result is a cart-before-the-horse scenario, in which utilities and languages have defined their own flavor of regular expression syntax, each with its own extensions and idiosyncrasies. Formally defining the regular expression syntax came later, as did efforts to make it more consistent. Regular expressions are defined by arranging strings of text, or patterns. Those patterns are composed of two types of characters, literals (plain text or literal text) and metacharacters.

Like the special file globbing characters, regular expression metacharacters take on a special meaning in the context of the tool in which they’re used. A few metacharacters are generally thought of as belonging to the “extended set” of metacharacters, specifically those introduced into egrep after grep was created.

The egrep command on Linux systems is simply a wrapper that runs grep -E, informing grep to use its extended regular expression capabilities instead of the basic ones. Examples of metacharacters include the ^ symbol, which means “the beginning of a line,” and the $ symbol, which means “the end of a line.” A complete listing of metacharacters follows in Tables 6-8 through 6-11.

Note

The backslash character (\) turns off (escapes) the special meaning of the character that follows, turning metacharacters into literals. For nonmetacharacters, it often turns on some special meaning.

Table 6-8. Regular expression position anchors

^

Match at the beginning of a line. This interpretation makes sense only when the ^ character is at the lefthand side of the regex.

$

Match at the end of a line. This interpretation makes sense only when the $ character is at the righthand side of the regex.

\< \>

Match word boundaries. Word boundaries are defined as whitespace, the start of line, the end of line, or punctuation marks. The backslashes are required and enable this interpretation of < and >.
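
As a quick, illustrative use of these anchors with grep (/etc/passwd is simply a convenient target file):

$ grep '^root' /etc/passwd
$ grep 'bash$' /etc/passwd
$ grep '\<root\>' /etc/passwd

The first command prints lines that begin with root, the second prints lines that end in bash, and the third matches root only when it appears as a whole word.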

Table 6-9. Regular expression POSIX character classes

Character class   Description

[:alnum:]         Alphanumeric [a-zA-Z0-9]
[:alpha:]         Alphabetic [a-zA-Z]
[:blank:]         Spaces or tabs
[:cntrl:]         Control characters
[:digit:]         Numeric digits [0-9]
[:graph:]         Any visible characters
[:lower:]         Lowercase [a-z]
[:print:]         Noncontrol characters
[:punct:]         Punctuation characters
[:space:]         Whitespace
[:upper:]         Uppercase [A-Z]
[:xdigit:]        Hex digits [0-9a-fA-F]
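
A couple of hedged, illustrative uses of these classes with grep (the filename notes.txt is only a placeholder):

$ grep '[[:digit:]]' notes.txt
$ grep '^[[:upper:]]' notes.txt

The first prints lines containing at least one digit; the second prints lines that begin with an uppercase letter. Note that the class must itself appear inside a bracket expression, hence the doubled brackets.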

Table 6-10. Regular expression character sets

[abc] [a-z]

Single-character groups and ranges. In the first form, match any single character from among the enclosed characters a, b, or c. In the second form, match any single character from among the range of characters bounded by a and z (POSIX character classes can also be used, so [a-z] can be replaced with [[:lower:]]). The brackets are for grouping only and are not matched themselves.

[^abc] [^a-z]

Inverse match. Match any single character not among the enclosed characters a, b, and c or in the range a-z. Be careful not to confuse this inversion with the anchor character ^, described earlier.

.

Match any single character except a newline.
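
For example, character sets might be used like this (config.txt is a placeholder filename):

$ grep '[Ff]alse' config.txt
$ grep '^[^#]' config.txt

The first matches false with either capitalization; the second prints lines whose first character is not #, a common way to skip comment lines.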

Table 6-11. Regular expression modifiers

* (basic and extended)

Match an unknown number (zero or more) of the single character (or single-character regex) that precedes it.

\? (basic), ? (extended)

Match zero or one instance of the preceding regex.

\+ (basic), + (extended)

Match one or more instances of the preceding regex.

\{n,m\} (basic), {n,m} (extended)

Match a range of occurrences of the single character or regex that precedes this construct. \{n\} matches n occurrences, \{n,\} matches at least n occurrences, and \{n,m\} matches any number of occurrences from n to m, inclusively.

\| (basic), | (extended)

Alternation. Match either the regex specified before or after the vertical bar.

\(regex\) (basic), (regex) (extended)

Grouping. Matches regex, but it can be modified as a whole and used in back-references. (\1 expands to the contents of the first \(\), and so on, up to \9.)
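
A few hedged examples of these modifiers in action (file.txt is a placeholder; remember that + and | need either escaping or grep -E):

$ grep 'ab*c' file.txt
$ grep -E 'ab+c' file.txt
$ grep 'ab\{2,3\}c' file.txt
$ grep -E 'cat|dog' file.txt

The first matches ac, abc, abbc, and so on; the second requires at least one b; the third requires two or three bs; and the last matches lines containing either cat or dog.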

It is often helpful to consider regular expressions as their own
language, where literal text acts as words and phrases. The “grammar” of
the language is defined by the use of metacharacters. The two are
combined according to specific rules (which, as mentioned earlier, may
differ slightly among various tools) to communicate ideas and get real
work done. When you construct regular expressions, you use
metacharacters and literals to specify three basic ideas about your
input text:

Position anchors

A position anchor is used to specify the position of
one or more character sets in relation to the entire line of text
(such as the beginning of a line).

Character sets

A character set matches text. It could be a series
of literals, metacharacters that match individual or multiple
characters, or combinations of these.

Quantity modifiers

Quantity modifiers follow a character set and
indicate the number of times the set should be repeated.

Using grep

A long time ago, as the idea of regular expressions was catching on, the line editor ed contained a command to display lines of a file being edited that matched a given regular expression. The command is:

g/regular expression/p

That is, “on a global basis, print the current line when a match for regular expression is found,” or more simply, “global regular expression print.” This function was so useful that it was made into a standalone utility named, appropriately, grep. Later, the regular expression grammar of grep was expanded in a new command called egrep (for “extended grep”). You’ll find both commands on your Linux system today, and they differ slightly in the way they handle regular expressions. For the purposes of Exam 101, we’ll stick with grep, which can also make use of the “extended” regular expressions when used with the -E option. You will find some form of grep on just about every Unix or Unix-like system available.
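
As a hedged illustration of basic versus extended syntax (the log path /var/log/messages varies between distributions):

$ grep 'error\|warning' /var/log/messages
$ grep -E 'error|warning' /var/log/messages

Both commands print lines containing either error or warning; the first uses basic regular expression alternation with an escaped bar, the second the extended form enabled by -E.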

Using sed

sed, the stream editor, is a powerful filtering program found on nearly every Unix system. The sed utility is usually used either to automate repetitive editing tasks or to process text in pipes of Unix commands (see “Objective 4: Use Streams, Pipes, and Redirects,” earlier in this chapter). The scripts that sed executes can be single commands or more complex lists of editing instructions.
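
Two small, illustrative sed invocations (report.txt and the substitution text are placeholders):

$ sed 's/colour/color/g' report.txt > report_us.txt
$ ps aux | sed -n '/sshd/p'

The first substitutes every occurrence of colour with color and writes the result to a new file, leaving the original untouched; the second uses sed as a filter in a pipe, printing only the lines of ps output that match sshd.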

Examples

Now that the gory details are out of the way, here are some
examples of simple regular expression usage that you may find
useful.
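
For instance, here is a sketch of the kind of search you might run; addresses.txt is a placeholder, and the pattern is deliberately loose (it also accepts octets above 255):

$ grep -E '^[0-9]{1,3}(\.[0-9]{1,3}){3}$' addresses.txt

This combines the three ideas described earlier: the ^ and $ anchors pin the match to the whole line, [0-9] is a character set, and the {1,3} quantifier with grouping repeats it to roughly match a dotted-quad IP address.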
