Pass4itsure helps candidates pass the LPI 101-400 exam and relieves pre-exam tension. The 101-400 package includes the official LPI 101-400 courses, a self-paced training guide, practice tests, and an exam simulation training package designed to help you pass the exam effortlessly. Do not spend too much time and money: with Pass4itsure learning materials you can pass the exam with ease.
Exam Code: 101-400
Exam Name: LPI Level 1 Exam 101, Junior Level Linux Certification, Part 1 of 2
Updated: Mar 31, 2017
The LPI 101-400 certificate is the dream IT certificate of many people. The LPI 101-400 exam tests an examinee's professional IT knowledge and experience, and passing it requires mastering a broad body of material. Ordinarily, that means spending a great deal of time and energy reviewing many books. Pass4itsure is a website that helps you save time and energy by mastering the LPI 101-400 material quickly and efficiently. If you are interested in Pass4itsure, you can first download part of Pass4itsure's LPI 101-400 exercises and answers for free as a trial.
Some sample 101-400 exam questions and answers are shared below:
Question No : 8 – (Topic 1)
Which statement is true regarding the UDP checksum?
A. It is used for congestion control.
B. It cannot be all zeros.
C. It is used by some Internet worms to hide their propagation.
D. It is computed based on the IP pseudo-header.
The method used to compute the checksum is defined in RFC 768:
“Checksum is the 16-bit one’s complement of the one’s complement sum of a pseudo
header of information from the IP header, the UDP header, and the data, padded with zero
octets at the end (if necessary) to make a multiple of two octets.”
In other words, all 16-bit words are summed using one’s complement arithmetic. Add the
16-bit values up. Each time a carry-out (17th bit) is produced, swing that bit around and
add it back into the least significant bit. The sum is then one’s complemented to yield the
value of the UDP checksum field.
If the checksum calculation results in the value zero (all 16 bits 0) it should be sent as the
one’s complement (all 1s).
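The folding and complementing steps described above can be sketched in Python. This is an illustrative helper, not part of any standard library; it takes the message as a list of 16-bit words (pseudo-header, UDP header, and padded data already split into words):

```python
def udp_checksum(words):
    """One's-complement sum of 16-bit words, folded and complemented (RFC 768)."""
    total = 0
    for w in words:
        total += w
        # Each time a carry-out (17th bit) is produced, swing it around
        # and add it back into the least significant bit.
        total = (total & 0xFFFF) + (total >> 16)
    checksum = ~total & 0xFFFF
    # A computed checksum of zero is transmitted as all ones (0xFFFF),
    # since zero on the wire means "no checksum computed".
    return 0xFFFF if checksum == 0 else checksum

udp_checksum([0x4500, 0x0073])  # simple sum, no carry
udp_checksum([0xFFFF, 0x0001])  # forces the carry fold-back
```

Note how an all-zero input yields 0xFFFF rather than 0, matching the rule quoted above.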
Question No : 9 – (Topic 1)
Which two Cisco Express Forwarding tables are located in the data plane? (Choose two.)
A. the forwarding information base
B. the label forwarding information base
C. the IP routing table
D. the label information table
E. the adjacency table
The control plane runs protocols such as OSPF, BGP, STP, LDP. These protocols are
needed so that routers and switches know how to forward packets and frames.
The data plane is where the actual forwarding takes place. The data plane is populated
based on the protocols running in the control plane. The Forwarding Information Base (FIB)
is used for IP traffic and the Label FIB is used for MPLS.
Question No : 10 – (Topic 1)
Which two mechanisms can be used to eliminate Cisco Express Forwarding polarization?
A. alternating cost links
B. the unique-ID/universal-ID algorithm
C. Cisco Express Forwarding antipolarization
D. different hashing inputs at each layer of the network
CEF polarization can cause suboptimal use of redundant paths to a destination network. Polarization is the effect seen when a hash algorithm chooses a particular path and the redundant paths remain completely unused.
How to Avoid CEF Polarization
- Alternate between default (SIP and DIP) and full (SIP + DIP + Layer 4 ports) hashing inputs configuration at each layer of the network.
- Alternate between an even and odd number of ECMP links at each layer of the network.
CEF load-balancing does not depend on how the protocol routes are inserted in the routing table; therefore, OSPF routes exhibit the same behavior as EIGRP. In a hierarchical network where several routers perform load-sharing in a row, they all use the same algorithm to load-share.
The hash algorithm load-balances this way by default (the number before the colon represents the number of equal-cost paths; the number after the colon represents the proportion of traffic forwarded per path). This means that:
- For two equal-cost paths, load-sharing is 46.666%-53.333%, not 50%-50%.
- For three equal-cost paths, load-sharing is 33.33%-33.33%-33.33% (as expected).
- For four equal-cost paths, load-sharing is 20%-20%-20%-40%, not 25%-25%-25%-25%.
This illustrates that, when there is an even number of ECMP links, the traffic is not load-balanced evenly.
Cisco IOS introduced a concept called unique-ID/universal-ID, which helps avoid CEF polarization. This algorithm, called the universal algorithm (the default in current Cisco IOS versions), adds a 32-bit router-specific value to the hash function (called the universal ID; this is a randomly generated value at switch boot-up that can be manually controlled). This seeds the hash function on each router with a unique ID, which ensures that the same source/destination pair hashes to a different value on different routers along the path. This process provides better network-wide load-sharing and circumvents the polarization issue. The unique-ID concept does not work for an even number of equal-cost paths due to a hardware limitation, but it works perfectly for an odd number of equal-cost paths. To overcome this problem, Cisco IOS adds one link to the hardware adjacency table when there is an even number of equal-cost paths, in order to make the system believe that there is an odd number of equal-cost links.
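The seeding idea can be sketched in Python. This is a minimal illustrative model, not Cisco's actual hash implementation; the function name, the SHA-256 choice, and the key format are assumptions made for the example:

```python
import hashlib

def path_index(src, dst, n_paths, universal_id=0):
    """Pick one of n equal-cost paths for a src/dst flow.

    Mixing a per-router universal ID into the hash means the same
    src/dst pair can map to a different path on each router along
    the way, avoiding CEF polarization.
    """
    key = f"{src}|{dst}|{universal_id}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return digest % n_paths

# Without a seed, every router computes the same index for this flow,
# so the redundant paths downstream go unused (polarization).
unseeded = [path_index("10.0.0.1", "10.0.0.2", 2, 0) for _ in range(3)]

# With distinct universal IDs, routers along the path may pick
# different members of the ECMP bundle for the same flow.
per_router = [path_index("10.0.0.1", "10.0.0.2", 2, uid) for uid in (11, 22, 33)]
```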
Question No : 11 DRAG DROP – (Topic 1)
Drag and drop the argument of the mls ip cef load-sharing command on the left to the
function it performs on the right.
Question No : 12 – (Topic 1)
Refer to the exhibit.
Which two are causes of output queue drops on FastEthernet0/0? (Choose two.)
A. an oversubscribed input service policy on FastEthernet0/0
B. a duplex mismatch on FastEthernet0/0
C. a bad cable connected to FastEthernet0/0
D. an oversubscribed output service policy on FastEthernet0/0
E. The router trying to send more than 100 Mb/s out of FastEthernet0/0
Output drops are caused by a congested interface. For example, the traffic rate on the
outgoing interface cannot accept all packets that should be sent out, or a service policy is
applied that is oversubscribed. The ultimate solution to resolve the problem is to increase
the line speed. However, there are ways to prevent, decrease, or control output drops
when you do not want to increase the line speed. You can prevent output drops only if
output drops are a consequence of short bursts of data. If output drops are caused by a
constant high-rate flow, you cannot prevent the drops. However, you can control them.
Question No : 13 – (Topic 1)
Which two mechanisms provide Cisco IOS XE Software with control plane and data plane
separation? (Choose two.)
A. Forwarding and Feature Manager
B. Forwarding Engine Driver
C. Forwarding Performance Management
D. Forwarding Information Base
Control Plane and Data Plane Separation
IOS XE makes it possible to build drivers for new Data Plane ASICs outside the IOS instance and have them program against a set of standard APIs, which in turn enforces Control Plane and Data Plane processing separation.
IOS XE accomplishes Control Plane / Data Plane separation through the introduction of the
Forwarding and Feature Manager (FFM) and its standard interface to the Forwarding
Engine Driver (FED). FFM provides a set of APIs to Control Plane processes. In turn, the
FFM programs the Data Plane via the FED and maintains forwarding state for the system.
The FED is the instantiation of the hardware driver for the Data Plane and is provided by the platform.
Question No : 14 – (Topic 1)
What is Nagle’s algorithm used for?
A. To increase the latency
B. To calculate the best path in distance vector routing protocols
C. To calculate the best path in link state routing protocols
D. To resolve issues caused by poorly implemented TCP flow control.
Silly window syndrome is a problem in computer networking caused by poorly implemented
TCP flow control. A serious problem can arise in the sliding window operation when the
sending application program creates data slowly, the receiving application program
consumes data slowly, or both. If a server with this problem is unable to process all
incoming data, it requests that its clients reduce the amount of data they send at a time (the
window setting on a TCP packet). If the server continues to be unable to process all
incoming data, the window becomes smaller and smaller, sometimes to the point that the
data transmitted is smaller than the packet header, making data transmission extremely
inefficient. The name of this problem is due to the window size shrinking to a “silly” value.
When there is no synchronization between the sender and receiver regarding the capacity of the data flow or the size of the packet, the silly window syndrome is created. When the syndrome is created by the sender, Nagle's algorithm is used. Nagle's solution requires that the sender transmit the first segment even if it is a small one, then wait until an ACK is received or a maximum segment size (MSS) worth of data has accumulated before sending more.
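The send-or-wait decision described above can be sketched as a small predicate. This is a simplified model for illustration (real TCP stacks track this per-connection state inside the kernel):

```python
def nagle_should_send(buffered_bytes, mss, unacked_in_flight):
    """Nagle's decision rule for a small write.

    Send immediately when a full MSS worth of data is buffered,
    or when there is no unacknowledged data in flight.
    Otherwise, hold the data and coalesce it with later writes.
    """
    if buffered_bytes >= mss:
        return True          # full segment ready: always send
    return not unacked_in_flight  # small segment: only if nothing is in flight

nagle_should_send(1460, 1460, True)   # full segment -> send
nagle_should_send(10, 1460, True)     # tiny write, data in flight -> wait
nagle_should_send(10, 1460, False)    # tiny write, pipe idle -> send
```

This is exactly why a stream of tiny writes over a long-latency link gets coalesced into fewer, larger segments instead of one packet per write.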
Question No : 15 – (Topic 1)
Which technology can create a filter for an embedded packet capture?
A. Control plane policing
B. Access lists
D. Traffic shaping
A filter can be applied to limit the capture to desired traffic. Define an Access Control List
(ACL) within config mode and apply the filter to the buffer:
ip access-list extended BUF-FILTER
 permit ip host 192.168.1.1 host 172.16.1.1
 permit ip host 172.16.1.1 host 192.168.1.1
monitor capture buffer BUF filter access-list BUF-FILTER
Question No : 16 – (Topic 1)
Which statement about MSS is true?
A. It is negotiated between sender and receiver.
B. It is sent in all TCP packets.
C. It is 20 bytes lower than MTU by default.
D. It is sent in SYN packets.
E. It is 28 bytes lower than MTU by default.
The maximum segment size (MSS) is a parameter of the Options field of the TCP header
that specifies the largest amount of data, specified in octets, that a computer or
communications device can receive in a single TCP segment. It does not count the TCP
header or the IP header. The IP datagram containing a TCP segment may be self-contained within a single packet, or it may be reconstructed from several fragmented
pieces; either way, the MSS limit applies to the total amount of data contained in the final,
reconstructed TCP segment.
The default TCP Maximum Segment Size is 536. Where a host wishes to set the maximum
segment size to a value other than the default, the maximum segment size is specified as a
TCP option, initially in the TCP SYN packet during the TCP handshake. The value cannot
be changed after the connection is established.
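The arithmetic behind these values can be checked with a short sketch, assuming IPv4 with no IP or TCP options (so each header is 20 bytes):

```python
IP_HEADER = 20   # IPv4 header without options, in bytes
TCP_HEADER = 20  # TCP header without options, in bytes

def default_mss(mtu):
    """MSS is the MTU minus the IP and TCP headers (40 bytes total for IPv4)."""
    return mtu - IP_HEADER - TCP_HEADER

default_mss(1500)  # standard Ethernet MTU -> 1460
default_mss(576)   # IPv4 minimum reassembly size -> 536, the default MSS above
```

This also shows why answer options citing "20 bytes" or "28 bytes lower than MTU" are wrong for plain IPv4/TCP: the gap is 40 bytes.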
After passing the exam, consider:
- Sharing your achievement with your professional network or potential employers
- Continuing your Linux studies, to add even more skills to your portfolio
- Upgrading your certification to the next level, to keep your 101-400 certification active longer
- Getting involved in our social community of Linux professionals
More official information: https://www.lpi.org/our-certifications/exam-101-objectives