Modern Windows operating systems

Computer Basics: Understanding Operating Systems

Lesson 8: Understanding Operating Systems

What is an operating system?

An operating system is the most important software that runs on a computer. It manages the computer’s memory and processes, as well as all of its software and hardware. It also allows you to communicate with the computer without knowing how to speak the computer’s language. Without an operating system, a computer is useless.


The operating system’s job

Your computer’s operating system (OS) manages all of the software and hardware on the computer. Most of the time, there are several different computer programs running at the same time, and they all need to access your computer’s central processing unit (CPU), memory, and storage. The operating system coordinates all of this to make sure each program gets what it needs.

Types of operating systems

Operating systems usually come pre-loaded on any computer you buy. Most people use the operating system that comes with their computer, but it’s possible to upgrade or even change operating systems. The three most common operating systems for personal computers are Microsoft Windows, macOS, and Linux.

Modern operating systems use a graphical user interface, or GUI (pronounced gooey). A GUI lets you use your mouse to click icons, buttons, and menus, and everything is clearly displayed on the screen using a combination of graphics and text.

Each operating system’s GUI has a different look and feel, so if you switch to a different operating system it may seem unfamiliar at first. However, modern operating systems are designed to be easy to use, and most of the basic principles are the same.

Microsoft Windows

Microsoft created the Windows operating system in the mid-1980s. There have been many different versions of Windows, but the most recent ones are Windows 10 (released in 2015), Windows 8 (2012), Windows 7 (2009), and Windows Vista (2007). Windows comes pre-loaded on most new PCs, which helps to make it the most popular operating system in the world.

Check out our tutorials on Windows Basics and specific Windows versions for more information.

macOS

macOS (previously called OS X) is a line of operating systems created by Apple. It comes preloaded on all Macintosh computers, or Macs. Some of the specific versions include Mojave (released in 2018), High Sierra (2017), and Sierra (2016).

According to StatCounter Global Stats, macOS users account for less than 10% of global operating systems—much lower than the percentage of Windows users (more than 80%). One reason for this is that Apple computers tend to be more expensive. However, many people do prefer the look and feel of macOS over Windows.

Check out our macOS Basics tutorial for more information.

Linux

Linux (pronounced LINN-ux) is a family of open-source operating systems, which means they can be modified and distributed by anyone around the world. This is different from proprietary software like Windows, which can only be modified by the company that owns it. The advantages of Linux are that it is free, and there are many different distributions—or versions—you can choose from.

According to StatCounter Global Stats, Linux users account for less than 2% of global operating systems. However, most servers run Linux because it’s relatively easy to customize.

To learn more about different distributions of Linux, visit the Ubuntu, Linux Mint, and Fedora websites, or refer to our Linux Resources. For a more comprehensive list, you can visit MakeUseOf’s list of The Best Linux Distributions.

Operating systems for mobile devices

The operating systems we’ve been talking about so far were designed to run on desktop and laptop computers. Mobile devices such as phones, tablet computers, and MP3 players are different from desktop and laptop computers, so they run operating systems that are designed specifically for mobile devices. Examples of mobile operating systems include Apple iOS and Google Android. In the screenshot below, you can see iOS running on an iPad.

Operating systems for mobile devices generally aren’t as fully featured as those made for desktop and laptop computers, and they aren’t able to run all of the same software. However, you can still do a lot of things with them, like watch movies, browse the Web, manage your calendar, and play games.

To learn more about mobile operating systems, check out our Mobile Devices tutorials.

Modern Operating System


Stefan Edelkamp, Stefan Schrödl, in Heuristic Search, 2012

8.1 Virtual Memory Management

Modern operating systems provide a general-purpose mechanism for processing data larger than available main memory called virtual memory. Transparent to the program, swapping moves parts of the data back and forth from disk as needed. Usually, the virtual address space is divided into units called pages; the corresponding equal-size units in physical memory are called page frames. A page table maps virtual addresses to page frames and keeps track of their status (loaded/absent). When a page fault occurs (i.e., a program tries to use an unmapped page), the CPU is interrupted; the operating system picks a rarely used page frame and writes its contents back to disk. It then fetches the referenced page into the page frame just freed, changes the map, and restarts the trapped instruction. In modern computers, memory management is implemented in hardware with a page size commonly fixed at 4,096 bytes.
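The page/frame bookkeeping starts with simple address arithmetic. As a hedged illustration (plain Python, not tied to any real MMU), splitting a virtual address at a 4,096-byte page size looks like this:

```python
PAGE_SIZE = 4096  # the common fixed page size mentioned above

def split_address(virtual_address: int) -> tuple[int, int]:
    """Split a virtual address into (virtual page number, offset within page)."""
    return virtual_address // PAGE_SIZE, virtual_address % PAGE_SIZE

print(split_address(20000))  # → (4, 3616), since 20000 = 4*4096 + 3616
```

Real hardware performs the same split with shifts and masks, since the page size is a power of two.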

Various paging strategies have been explored that aim at minimizing page faults. Belady has shown that an optimal offline page-exchange strategy evicts the page whose next use lies furthest in the future. Unfortunately, the system, unlike possibly the application program itself, cannot know this in advance. Several online algorithms for the paging problem have been proposed, such as last-in-first-out (LIFO), first-in-first-out (FIFO), least-recently-used (LRU), and least-frequently-used (LFU). Although Sleator and Tarjan proved that LRU is the best general online algorithm for the problem, we can further reduce the number of page faults by designing data structures that exhibit memory locality, so that successive operations tend to access nearby memory cells.
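To make the comparison between replacement policies concrete, the following toy simulation (illustrative Python; the reference string and frame count are invented for the example) counts page faults under FIFO and LRU:

```python
from collections import OrderedDict

def count_faults_fifo(refs, frames):
    """Count page faults with first-in-first-out replacement."""
    resident, queue, faults = set(), [], 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:          # evict the oldest resident page
                resident.discard(queue.pop(0))
            resident.add(page)
            queue.append(page)
    return faults

def count_faults_lru(refs, frames):
    """Count page faults with least-recently-used replacement."""
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(resident) == frames:          # evict the least recently used
                resident.popitem(last=False)
            resident[page] = True
    return faults

refs = [1, 2, 3, 1, 2, 4, 1, 2, 5]
print(count_faults_fifo(refs, 3), count_faults_lru(refs, 3))  # → 7 5
```

On this reference string, LRU's exploitation of recency saves two faults over FIFO with the same three frames.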

Sometimes it is even desirable to have explicit control of secondary memory manipulations. For example, fetching data structures larger than the system page size may require multiple disk operations. A file buffer can be regarded as a kind of software paging that mimics swapping at a coarser level of granularity. Generally, an application can outperform the operating system’s memory management because it is better informed and can predict its own future memory accesses.

Particularly for search algorithms, system paging often becomes the major bottleneck. This problem has been experienced when applying A* to the domain of route planning. Moreover, A* does not respect memory locality at all; it explores nodes in the strict order of f-values, regardless of their neighborhood, and hence jumps back and forth in a spatially unrelated way.

Data Management

Memory management

Modern operating systems provide the abstraction of virtual memory to user processes (Peter Denning, Virtual Memory, 1970). Virtual memory hides the true storage medium and makes data byte addressable regardless of where it actually resides. Operating systems provide each process a separate virtual memory address space, allowing them to execute with the entire virtual address space at their disposal. The most important aspect of virtual memory for this discussion is that it allows a process to execute without the need to have all of its code and data resident in the CPU main memory (i.e., DRAM).

The virtual address space of a process is divided into fixed-size blocks, called pages. In the physical memory system, the physical address space (the range of actual memory locations) is likewise divided into equally sized frames so that a frame is capable of storing a page. Virtual pages can be mapped to any frame in main memory, mapped to a location on disk, or not yet be allocated. However, the CPU requires a page to be in a main memory frame when it is being accessed or executed. When a process executes an instruction using a virtual memory address, a hardware unit called the Memory Management Unit (MMU) intervenes and provides the mapping of the virtual address to the physical address. If the physical address of a page is not in main memory, a page fault occurs, and the process is suspended while the page is retrieved and a virtual-to-physical mapping is created. This technique is known as demand paging and is completely transparent to the user process (except for the time it takes to service the page fault). Figure 7.1 shows an example of demand paging.
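The translate-or-fault path just described can be sketched as a toy simulation. Nothing below corresponds to real MMU hardware; the page table is a plain dictionary and a miss stands in for a page fault serviced from disk:

```python
PAGE_SIZE = 4096

class TinyMMU:
    """Toy model of demand paging: translate, or fault and load on a miss."""
    def __init__(self, num_frames):
        self.page_table = {}        # virtual page number -> frame number
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def translate(self, virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        if page not in self.page_table:
            self.faults += 1        # page fault: "fetch" the page from disk
            if not self.free_frames:
                evicted = next(iter(self.page_table))   # naive eviction choice
                self.free_frames.append(self.page_table.pop(evicted))
            self.page_table[page] = self.free_frames.pop()
        return self.page_table[page] * PAGE_SIZE + offset

mmu = TinyMMU(num_frames=2)
mmu.translate(0)        # fault: page 0 loaded into a frame
mmu.translate(100)      # hit: same page, no fault
mmu.translate(8192)     # fault: page 2 loaded
print(mmu.faults)       # → 2
```

The user process in the text never sees any of this; it just observes that some memory accesses take longer than others.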


Figure 7.1. An illustration of demand paging for two user processes.

Virtual memory has implications on data transfer performance in OpenCL, since transferring data from the CPU to the GPU when using a discrete GPU uses Direct Memory Access (DMA) over the PCI-Express bus. DMA is an efficient way to access data directly from a peripheral device without CPU intervention. DMA requires that the data is resident in main memory and will not be moved by the operating system. When the operating system does not have the discretion to move a page, the page is said to be pinned (or page-locked).

The PCI-Express protocol allows any device connected to the bus, such as a GPU, to transfer data to or from the CPU’s main memory. When performing DMA transfers, a device driver running on the CPU supplies a physical address, and the DMA engine on the GPU can then perform the transfer and signal to the CPU when it has completed. Once the transfer completes, the pages can then be unmapped from memory.

Modern x86 systems use an I/O Memory Management Unit (IOMMU) as an interface between the PCI-Express bus and the main memory bus (AMD IOMMU Architectural Specification; Intel Virtualization Technology for Directed I/O Architecture Specification). The IOMMU performs the same role for peripheral devices as the MMU does for x86 cores, mapping virtual I/O addresses to physical addresses. The major benefit of utilizing an IOMMU for a GPU is that it allows the device to perform DMA transfers from noncontiguous physical address locations and allows access to physical locations that may be out of the range of addresses supported by the device. A block diagram of a system with an IOMMU is shown in Figure 7.2.

Figure 7.2. A system containing an IOMMU.

VPN Theory and Usage

L2TP VPNs

NetScreen appliances support the Layer 2 Tunneling Protocol, or L2TP for short, when operating in Layer 3 mode. L2TP works by sending PPP (Point-to-Point Protocol) frames through a tunnel between an L2TP access concentrator (LAC) and an L2TP network server (LNS).

Originally, L2TP was designed so that a dial-up user could make a virtual PPP connection through a LAC at an ISP. The LAC at the ISP would create a tunnel to the LNS at either another ISP or a corporate network. The L2TP tunnel never actually extended to the client’s desktop, only to the ISP’s LAC.

L2TP tunnels are not encrypted, so they are not actually true VPN tunnels. The primary purpose of L2TP is that a dial-up user can be assigned an IP address that is known and can be referenced in policies. To encrypt an L2TP tunnel, you need to use an encryption scheme such as IPSec; this combination is generally referred to as L2TP-over-IPSec. L2TP-over-IPSec requires two things: the IPSec and L2TP tunnels must be set up with the same endpoints and linked together in a policy, and the IPSec tunnel must operate in transport mode.

Modern operating systems, such as Windows XP, can act as a LAC on their own, so an L2TP tunnel can extend all the way to the desktop. NetScreen devices can act as LNS servers, so an L2TP VPN can easily be created between a NetScreen appliance and a Windows 2000 desktop, provided you don’t mind tweaking your registry a bit. To use L2TP without IPSec, change the value of the registry key ProhibitIPSec at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\RasMan\Parameters (or create it if it does not exist) to hexadecimal 1 and reboot.

The NetScreen device does need to be configured with a group of IP addresses to assign to the L2TP clients, and these IP addresses must differ from the subnet in use on the LAN. For example, if your LAN address range is 10.0.0.0/24, then you would need to use something outside this range, such as 10.0.1.0 or 10.0.2.0. Note that you can use private address ranges that are not routable on the Internet. When the client connects to the NetScreen appliance, it is assigned an IP address for the L2TP tunnel, as well as DNS (Domain Name Service) and WINS (Windows Internet Naming Service) servers if applicable. The NetScreen appliance can also perform PPP authentication for the client through RADIUS, LDAP, SecurID, or its own internal database. NetScreen appliances support the use of Challenge Handshake Authentication Protocol (CHAP) with RADIUS and its internal database. NetScreen appliances also support Password Authentication Protocol (PAP) with RADIUS, LDAP, SecurID, and its internal database.

Technology Overview: Computer Basics

Ted Fair, …, Technical Editor, in Cyber Spying, 2005

Users

Most modern operating systems are designed to accommodate multiple users, and some allow multiple simultaneous users. Users are the people who use a computer system. They generally have a name and a password. Computers keep track of users’ names and passwords and then assign them “property.” This property consists of a user’s files, lists of the different programs they can run, things they are allowed to do, and each user’s configuration for shared programs. Most systems also have a “superuser,” the primary user who has control over the system. On Linux the superuser is named “root,” and on Windows the account is named “Administrator.” When spying on a computer, it is very important to know which user you are and which you are after, because it will affect what you are allowed to do.

Operating Systems

4.2.3 System Calls

In modern operating systems, applications are separated from the operating system itself. The operating system code runs in a privileged processor mode known as kernel mode and has access to system data and hardware. Applications run in a nonprivileged processor mode known as user mode and have limited access to system data and hardware; they request services by making system calls, which are a set of tightly controlled application programming interfaces (APIs).

Corresponding to each system call is a library procedure that user programs can call. This procedure puts the parameters of the system call in a specified place, such as the machine registers; it then issues a TRAP instruction, which is a kind of protected procedure call, to start the operating system. The purpose of the library procedure is to hide the details of the TRAP instruction and make system calls look like ordinary procedure calls.

When the operating system gets control after the TRAP, it examines the parameters to see if they are valid, and if so, performs the work requested. When it is finished, the operating system puts a status code in a register, telling whether it succeeded or failed, and executes a return from trap instruction to return control back to the library procedure. The library procedure then returns to the caller in the usual way, returning the status code as a function value. Sometimes additional values are returned in the parameters.
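This wrapper pattern is visible even from a high-level language. In the sketch below (Python on Linux; loading the C library through ctypes is purely for illustration, not how the interpreter normally issues system calls), the library procedure os.getpid() and a direct call into the C library's getpid wrapper reach the same kernel service:

```python
import ctypes
import os

# os.getpid() is a library procedure wrapping the getpid system call;
# ctypes.CDLL(None) gives a handle to the already-loaded C library.
libc = ctypes.CDLL(None)
libc.getpid.restype = ctypes.c_int

print(os.getpid() == libc.getpid())  # → True: both paths end in the same trap
```

The caller never sees the TRAP instruction, the register conventions, or the return-from-trap; it sees an ordinary function returning an ordinary value, exactly as the text describes.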

Understanding the Technology

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

Client Software

Most modern operating systems can also function as network clients. For example, if you were running Windows Server 2008 on your computer, you could log on to the network as a user, run programs, and use it as you would Windows Vista. With the exception of NetWare, this is common among many server operating systems. However, it would be inefficient and costly to run Windows Server 2008 as a desktop client, as it costs considerably more than the desktop operating system. UNIX is most often used as a server, but Linux has grown in popularity as a desktop/client OS. Mac OS X comes in both client and server forms. Novell doesn’t make a client OS of its own; NetWare clients generally run Windows or UNIX operating systems with NetWare client software installed.

This brings up an important point: Client machines don’t necessarily have to run an operating system made by the vendor of the network’s server software. Macintosh and UNIX-based clients can access Windows servers, Windows and Macintosh clients can access UNIX servers, and so forth. As shown in Figure 4.16, the Novell client for Windows is used to supply a username and password, which is then sent to a Novell server. The Novell server then uses eDirectory to authenticate the user and to determine what the user is permitted to access, and may access a script to map drives to locations on the network. As a result, the user will see a variety of new drive letters, which allow the user to store files on network servers.

Figure 4.16. The Novell Client

Buffer Overflow

Dynamic Loading New Libraries

Most modern operating systems support the notion of dynamic shared libraries. They do this to minimize memory usage and reuse code as much as possible. As I said in the last section, you can use whatever is loaded to your advantage, but sometimes you may need something that isn’t already loaded.

Just like code in a program, a payload can choose to load a dynamic library on demand and then use functions in it. We examined an example of this in the simple Windows NT exploit example.

Under Windows NT, there is a pair of functions that will always be loaded in a process space: LoadLibrary() and GetProcAddress(). These functions allow us to load essentially any DLL and query it for a function by name. On UNIX, the equivalent combination is dlopen() and dlsym().

These functions fall into two categories: a loader and a symbol lookup. A quick explanation of each will give you a better understanding of their usefulness.


A loader like LoadLibrary() or dlopen() loads a shared piece of code into a process space. It does not imply that the code will be used, only that it is available for use. Basically, with each you can load a piece of code into memory that is in turn mapped into the process.

A symbol lookup function, like GetProcAddress() or dlsym(), searches the loaded shared library’s export tables for function names. You specify the function you are looking for by name, and it returns with the address of the function’s start.

Basically, you can use these preloaded functions to load any DLL that your code may want to use. You can then get the address of any of the functions in those dynamic libraries by name. This gives you nearly infinite flexibility, as long as the dynamic shared library is available on the machine.
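Python's ctypes module wraps exactly this dlopen()/dlsym() pair on UNIX, which makes the loader/lookup split easy to demonstrate. This is a hedged sketch: the library name libm.so.6 assumes a mainstream Linux system.

```python
import ctypes

# dlopen(): map the math library into this process's address space
libm = ctypes.CDLL("libm.so.6")

# dlsym(): look up the exported symbol "cos" by name
cos = libm.cos
cos.restype = ctypes.c_double
cos.argtypes = [ctypes.c_double]

print(cos(0.0))  # → 1.0
```

The two steps are visibly distinct: loading makes the code available, and the by-name lookup returns a callable address, just as the text describes for shellcode.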

There are two common ways to use dynamic libraries to get the functions you need. You can either hardcode the addresses of your loader and symbol lookups, or you can search through the attacked process’s import table to find them at runtime.

Hardcoding the addresses of these functions works well but can impair your code’s portability, because the technique only works in processes that have the functions loaded at the hardcoded addresses. For Windows NT, this typically limits your exploit to a single service pack and OS combination; for UNIX, it may not work at all, depending on the platform and libraries used.

The second option is to search the executable file’s import tables. This works better and is more portable, but has the disadvantage of requiring much larger code. In a tight buffer situation where you can’t tuck your code elsewhere, this may simply not be an option. The basic idea is to treat your shellcode like a symbol lookup function: you look for the function already loaded in memory via the imported functions list. This, of course, assumes that the function is already loaded in memory, but this is often, if not always, the case. This method requires you to understand the linking format used by your target operating system. For Windows NT, it is the PE, or Portable Executable, format. For most UNIX systems, it is the Executable and Linking Format (ELF).

You will want to examine the specs for these formats and get to know them better. They offer a concise view of what the process has loaded at linkage time, and give you hints into what an executable or shared library can do.
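As a small taste of what those specifications cover, the first 16 bytes of any ELF file (the e_ident array) carry the magic number, word size, and byte order. The sketch below parses a hand-built header rather than a real binary, so it runs on any platform:

```python
# e_ident of a typical 64-bit little-endian executable (hand-built example bytes:
# magic, EI_CLASS=2, EI_DATA=1, EI_VERSION=1, then padding)
e_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8

magic, ei_class, ei_data = e_ident[:4], e_ident[4], e_ident[5]
assert magic == b"\x7fELF", "not an ELF file"
print("64-bit" if ei_class == 2 else "32-bit",
      "little-endian" if ei_data == 1 else "big-endian")  # → 64-bit little-endian
```

Walking from this header to the program headers, section headers, and finally the dynamic symbol and import tables is precisely the traversal an import-table-searching payload performs.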

Acquiring Data, Duplicating Data, and Recovering Deleted Files

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

Swap and Page Files

Most modern operating systems utilize a feature called virtual memory, which allows the system to “fool” applications into thinking the computer has more RAM than is actually installed. A portion of the hard disk is used to emulate additional memory and data is “swapped” from real physical memory to this holding space on disk as it’s needed by the processor. On Windows 9x, this data is held in a file called the swap file. On Windows NT, 2000, XP and Vista systems, it is called the page file because data is swapped in units called pages. Linux systems create a swap partition on the disk for this same purpose. These files are generally created automatically by the operating system.

These files contain all sorts of data, including e-mail, Web pages, word processing documents, and any other work that has been performed on the computer during the work session. Many computer users are either unaware of the existence of these files or don’t really understand what they are, what they do, and what kind of data they contain. Some swap files are temporary and others are permanent, depending on the operating system in use and how it is configured. The files might be marked with the hidden attribute, which makes them invisible in the directory structure under default settings. Swap files are created by the operating system in a default location. Table 7.6 shows the swap filename and its default location for different Microsoft operating systems. Note that technically savvy users can change the location of the swap file or create additional swap/page files so that there are multiple virtual memory locations on a system.

Table 7.6. Swap Filenames and Locations

Operating System | Filename | Default Location
Windows 3.x | 386SPART.PAR | Windows\System subdirectory or root directory of the drive designated in the virtual memory dialog box
Windows 9x | WIN386.SWP | Root directory of the drive designated in the virtual memory dialog box
Windows NT/2000/XP | PAGEFILE.SYS | Root directory of the drive on which the system root directory (WINNT by default) is installed

To find the location of the swap or page file, open the Virtual Memory dialog box. (This is also where a user can change the file’s location.) For example, in Windows XP Professional, open the System applet from the Control Panel, click the Advanced tab, click the Settings button under Performance, then click the Advanced tab again, and click the Change button at the bottom of the page under Virtual Memory. This series of steps brings you to the Virtual Memory dialog box (at last!), and you can see the location of one or more page files, as shown in Figure 7.8 .

Figure 7.8. Viewing the Location, Size, and Status of the Page File(s) on Windows XP Using the Virtual Memory Dialog Box

You can then navigate to the drive on which the file is stored and locate it there. Note, however, that the page file will not be visible unless you have unchecked the Hide protected operating system files (recommended) checkbox in the Tools | Folder Options | View advanced settings in Windows Explorer.

You can view the swap/page file with a utility such as DiskEdit, but much of the information is binary (0s and 1s) and not very usable. Special programs such as NTA Stealth and the Filter I “intelligent forensic editor” are designed to read swap file data and other ambient computer data. Filter I uses a type of artificial intelligence (AI) to locate fragments of various types of files, including e-mail, chat conversations, newsgroup posts, and even network passwords and credit card and Social Security numbers. NTA Stealth is an upgrade to the Net Threat Analyzer tool, and is used to evaluate Internet browsing, download activity, and e-mail communications in ambient data for evidence related to illegal activities. Both of these software packages are marketed by NTI ( www.forensics-intl.com ). The company also makes text search and disk search programs that can search storage devices at the physical level and locate data that is stored between allocated partitions or text strings that are in unallocated space.

Run-time support systems

Dimitrios Serpanos, Tilman Wolf, in Architecture of Network Systems, 2011

Networking software in operating systems

Practically all modern operating systems have built-in support for networking functionality. This functionality not only includes the protocol processing necessary to have a computer act as an end system (i.e., by implementing application and transport layer processing), but also the functionality that is necessary for router systems (i.e., packet forwarding).

Figure 14-2 shows the main software components related to networking in a typical Unix-based operating system (e.g., NetBSD, Linux). Above the network interface drivers, each layer in the protocol stack (link layer, network layer, transport layer) has its own processing component in the operating system kernel. If different protocol stack configurations are used (e.g., UDP instead of TCP, or a different link layer), these processing components can be combined differently. For simplicity, we only show the TCP/IP stack in Figure 14-2. Applications, which are located in user space, use the socket interface to interact with the networking protocol stack. Also located in user space are some components of the control plane software. Route daemons handle route update computations and updates to the forwarding table in the IP forwarding component. Other control plane software components (e.g., error handling in the network layer) are part of the kernel.

Figure 14-2. Networking support in an operating system.

In addition to networking software built into the kernel, operating systems also provide a set of command line tools for administrators to manage system configuration. These tools can be used to configure IP addresses of network interfaces, set up static routes, obtain monitoring information, etc. For a more detailed discussion of network software implementation inside the NetBSD operating system, see the excellent book by Wright and Stevens [ 189 ].
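All of this kernel machinery is reached from user space through the socket interface. The minimal sketch below uses a connected socket pair as a stand-in for a real TCP connection, so it needs no network access:

```python
import socket

# socketpair() gives two already-connected endpoints, like the two ends of a stream
a, b = socket.socketpair()

a.sendall(b"hello")          # data descends through the kernel's protocol stack...
data = b.recv(1024)          # ...and comes back up at the other endpoint
print(data)                  # → b'hello'

a.close()
b.close()
```

The same send/recv calls work unchanged on a socket connected across a real network; the protocol-stack layering in Figure 14-2 is entirely hidden behind this interface.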

Encrypt and Authenticate Modes

Tom St Denis, Simon Johnson, in Cryptography for Developers, 2007

CCM Implementation

We use the CCM implementation from LibTomCrypt as reference. It is a compact and efficient implementation of CCM that performs the entire CCM encoding with a single function call. Since it accepts all the parameters in a single call, it has quite a few parameters. Table 7.3 matches the implementation names with the design variable names.

Table 7.3. CCM Implementation Guide

Design Name | Implementation Name | Function
 | cipher | Index into LTC tables for the 128-bit cipher to use; makes CCM agnostic to the cipher choice
K | key | Secret key
 | keylen | Length of key in octets
 | uskey | Previously scheduled key, used to save time by not running the key schedule
N | nonce | The CCM nonce
 | noncelen | Length of nonce in octets
 | header | The header or AAD data
 | headerlen | Length of header in octets
P | pt | Plaintext
Q | ptlen | Length of plaintext in octets
C | ct | Ciphertext
T | tag | The MAC tag
t | taglen | The length of the MAC tag desired

We allow the caller to schedule a key prior to the call. This is a good idea in practice, as it shaves a fair chunk of cycles per call. Even if you are not using LibTomCrypt, you should provide or use this style of optimization in your code.

We provide a word-oriented optimization later in this implementation. This check is a simple sanity check to make sure it will actually work.

We force the tag length to be even and truncate it if it is larger than 16 bytes. We do not error out on excessively long tag lengths, as the caller may simply have passed something such as

In which case, MAXTAGLEN could be larger than 16 if they are supporting more than one algorithm.

We must reject tag lengths less than four bytes per the CCM specification. We have no choice but to error out, as the caller has not indicated we can store at least four bytes of MAC tag.

LibTomCrypt has the capability to “overload” functions, in this case CCM. If the pointer is not NULL, the computation is offloaded to it automatically. In this way, a developer can take advantage of accelerators without rewriting their application. This technically is not part of CCM, so you can skip this chunk if you want.

Here we compute the value of L (q in the CCM design). L was the original name for this variable, which is why we use it here. We make sure that L is at least 2, as per the CCM specification.

This resizes the nonce if it is too large and adjusts the L parameter as required. The caller has to be aware of the tradeoff. For instance, if you want to encrypt one-megabyte packets, you will need at least three bytes to encode the length, which means the nonce can only be 12 bytes long. One could add a check to ensure that L is never too small for the plaintext length.

If the caller does not supply a key, we must schedule one. We avoid placing the scheduled key structure on the stack by allocating it from the heap. This is important for embedded and kernel applications, as the stacks can be very limited in size.

This section of code creates the B0 value we need for the CBC-MAC phase of CCM. The PAD array holds the 16 bytes of CBC data for the MAC, while CTRPAD, which we see later, holds the 16 bytes of CTR output.

The first byte (line 122) of the block is the flags. We set the Adata flag based on headerlen, encode the tag length by dividing taglen by two, and finally store the length of the plaintext length.

Next, the nonce is copied into the block. We use 16 − (L + 1) bytes of the nonce, since we must also store the flags byte and L bytes of the plaintext length value.

To make things a bit more practical, we only store 32 bits of the plaintext length. If the user specifies a short nonce, the value of L has to be increased to compensate. In this case, we pad with zero bytes before encoding the actual length.
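The B0 layout being built here follows RFC 3610: one flags byte, then the nonce, then the message length in the last L bytes. The following standalone Python sketch mirrors the specification rather than LibTomCrypt's actual C code:

```python
def ccm_b0(nonce: bytes, taglen: int, ptlen: int, have_header: bool) -> bytes:
    """Build the 16-byte B0 block per RFC 3610."""
    L = 15 - len(nonce)                    # bytes left over for the length field
    # flags: Adata bit, encoded tag length (taglen-2)/2, encoded L-1
    flags = (0x40 if have_header else 0) | (((taglen - 2) // 2) << 3) | (L - 1)
    return bytes([flags]) + nonce + ptlen.to_bytes(L, "big")

# A 13-byte nonce leaves L = 2; an 8-byte tag with header data gives flags 0x59,
# matching the B0 blocks in the RFC 3610 test vectors.
b0 = ccm_b0(bytes(13), taglen=8, ptlen=31, have_header=True)
print(hex(b0[0]), len(b0))  # → 0x59 16
```

Note how the nonce length and L trade off against each other inside the fixed 16-byte block, exactly the constraint discussed above.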

We are using CBC-MAC effectively with a zeroed IV, so the first thing we must do is encrypt PAD. The ciphertext is now the IV for the CBC-MAC of the rest of the header and plaintext data.

We only get here and do any of the following code if there is header data to process.

The encoding of the length of the header data depends on the size of the header data. If it is less than 65,280 bytes, we use the short two-byte encoding. Otherwise, we emit the escape sequence 0xFF 0xFE and then the four-byte encoding. CCM supports even larger header sizes, but you are unlikely to ever need to support them.
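That length-encoding rule comes from RFC 3610 and fits in a few lines. The sketch below is illustrative only and ignores the even larger encodings mentioned above:

```python
def encode_header_length(headerlen: int) -> bytes:
    """Encode the AAD length prefix per RFC 3610 (32-bit lengths and below)."""
    if headerlen < 0xFF00:                   # 65,280: short two-byte form
        return headerlen.to_bytes(2, "big")
    # escape sequence 0xFF 0xFE followed by the four-byte length
    return b"\xff\xfe" + headerlen.to_bytes(4, "big")

print(encode_header_length(20).hex())      # → 0014
print(encode_header_length(70000).hex())   # → fffe00011170
```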

Note that instead of XORing the PAD (as an IV) against another buffer, we simply XOR the lengths into the PAD. This avoids any double buffering we would otherwise have to use.

This loop processes the entire header data. We do not provide any sort of LTC_FAST optimizations, since headers are usually empty or very short. Every 16 bytes of header data, we encrypt the PAD to emulate CBC-MAC properly.

If we have leftover header data (that is, headerlen is not a multiple of 16), we pad it with zero bytes and encrypt it. Since XORing zero bytes is a no-operation, we simply ignore that step and invoke the cipher.

This code creates the initial counter for the CTR encryption mode. The flags only contain the length of the plaintext length. The nonce is copied much as it is for the CBC-MAC, and the rest of the block is zeroed. The bytes after the nonce are incremented during the encryption.
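A sketch of that initial counter block, under the same L and nonce conventions as before (illustrative names, not the book's exact code):

```c
#include <stdint.h>

/* Sketch of the initial CTR block: the flags byte carries only L - 1,
   the nonce follows, and the trailing L counter bytes start at zero. */
static void ccm_init_ctr(uint8_t *ctr, const uint8_t *nonce, int L)
{
    int i;
    ctr[0] = (uint8_t)(L - 1);          /* flags: length of length only */
    for (i = 0; i < 15 - L; i++) {
        ctr[1 + i] = nonce[i];          /* copy the 15 - L nonce bytes */
    }
    for (i = 16 - L; i < 16; i++) {
        ctr[i] = 0;                     /* counter starts at zero */
    }
}
```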

If we are encrypting, we handle all complete 16-byte blocks of plaintext we have.

The CTR counter is big endian and stored at the end of the ctr array. This code increments it by one.
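A big-endian increment with carry propagation can be sketched like this (hypothetical helper):

```c
#include <stdint.h>

/* Sketch: increment the big-endian counter held in the last L bytes
   of the 16-byte ctr block, propagating the carry toward the front. */
static void ctr_increment(uint8_t *ctr, int L)
{
    int i;
    for (i = 15; i >= 16 - L; i--) {
        if (++ctr[i] != 0) {
            break;                      /* no carry out of this byte */
        }
    }
}
```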

We must encrypt the CTR counter before using it to encrypt plaintext.

This loop XORs 16 bytes of plaintext against the CBC-MAC pad, and then creates 16 bytes of ciphertext by XORing CTRPAD against the plaintext. We do the encryption second (after the CBC-MAC), since we allow the plaintext and ciphertext to point to the same buffer.
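The ordering matters, so a sketch helps: because the CBC-MAC pad absorbs the plaintext before the keystream is applied, the plaintext and ciphertext pointers may alias the same buffer (names here are illustrative):

```c
#include <stdint.h>

/* Sketch of the in-place-safe block step: fold plaintext into the
   CBC-MAC pad first, then XOR the CTR keystream to make ciphertext.
   Because each index reads pt[i] before ct[i] is written, pt and ct
   may point to the same buffer. */
static void ccm_block(uint8_t *pad, uint8_t *ct,
                      const uint8_t *pt, const uint8_t *ctrpad)
{
    int i;
    for (i = 0; i < 16; i++) {
        pad[i] ^= pt[i];               /* CBC-MAC absorbs plaintext */
        ct[i]   = pt[i] ^ ctrpad[i];   /* CTR keystream yields ciphertext */
    }
}
```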

Encrypting the CBC-MAC pad performs the required MAC operation for this 16-byte block of plaintext.

We handle decryption similarly, but distinctly, since we allow the plaintext and ciphertext to point to the same memory. Since this code is likely to be unrolled, we avoid having redundant conditional code inside the main loop where possible.

This block performs the CCM operation for any bytes of plaintext not handled by the LTC_FAST code, either because the plaintext is not a multiple of 16 bytes or because LTC_FAST was not enabled. Ideally, we want to avoid needing this code, as it is slow and over many packets can consume a fair amount of processing power.

We finish the CBC-MAC if there are bytes left over. As in the processing of the header, we implicitly pad with zeros by encrypting the PAD as is. At this point, the PAD now contains the CBC-MAC value but not the CCM tag as we still have to encrypt it.

The CTR pad for the CBC-MAC tag is computed by zeroing the last L bytes of the CTR counter and encrypting it to CTRPAD.

If we scheduled our own key, we will now free any allocated resources.

CCM allows a variable-length tag, from 4 to 16 bytes in 2-byte increments. We encrypt and store the CCM tag by XORing the CBC-MAC tag with the last encrypted CTR counter.
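That final step is a simple truncated XOR; as a sketch (the function name is illustrative, and CTRPAD is assumed to already hold the encryption of the zeroed counter described above):

```c
#include <stdint.h>

/* Sketch: the CCM tag is the CBC-MAC value XORed with the keystream
   from the zeroed counter, truncated to taglen bytes (4..16, even). */
static void ccm_make_tag(uint8_t *tag, int taglen,
                         const uint8_t *pad, const uint8_t *ctrpad)
{
    int i;
    for (i = 0; i < taglen; i++) {
        tag[i] = pad[i] ^ ctrpad[i];
    }
}
```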

This block zeroes memory on the stack that could be considered sensitive. We hope the stack has not been swapped to disk, but this routine does not make this guarantee. By clearing the memory, any further potential stack leaks will not be sharing the keys or CBC-MAC intermediate values with the attacker. We only perform this operation if the user requested it by defining the LTC_CLEAN_STACK macro.
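In the spirit of that cleanup (a sketch, not LibTomCrypt's actual zeroize routine), a wipe helper must defeat the compiler's dead-store elimination, which a volatile pointer accomplishes:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a sensitive-buffer wipe in the spirit of LTC_CLEAN_STACK.
   The volatile qualifier prevents the compiler from optimizing away
   stores to a buffer that is never read again. */
static void burn_buf(volatile uint8_t *p, size_t n)
{
    while (n--) {
        *p++ = 0;
    }
}
```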

In most modern operating systems, the memory used by a program (or process) is known as virtual memory. The memory has no fixed physical address and can be moved between locations and even swapped to disk (through page invalidation). This latter action is typically known as swap memory, as it allows users to emulate having more physical memory than they really do.

The downside to swap memory, however, is that the process memory could contain sensitive information such as private keys, usernames, passwords, and other credentials. To prevent this, an application can lock memory. In operating systems such as those based on the NT kernel (e.g., Win2K, WinXP), locking is entirely voluntary and the OS can choose to later swap nonkernel data out.

In POSIX-compatible operating systems, such as those based on the Linux and BSD kernels, a set of functions such as mlock(), munlock(), mlockall(), and so forth is provided to facilitate locking. Physical memory in most systems can be costly, so the polite and proper application will request to lock as little memory as possible. In most cases, locked memory will span a region that contains whole pages of memory. On the x86 series of processors, a page is four kilobytes, which means that any locked region actually locks a multiple of four kilobytes.
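A minimal sketch of using those POSIX calls around a key buffer (the helper names are illustrative; mlock() can fail under restrictive resource limits, so the return value should be checked in real code):

```c
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>   /* POSIX mlock(), munlock() */

/* Sketch: pin a buffer of sensitive key material so it cannot be
   swapped to disk.  Returns 0 on success, -1 on failure. */
static int lock_key(void *key, size_t len)
{
    return (mlock(key, len) == 0) ? 0 : -1;
}

/* Wipe the key material before releasing the lock on its pages. */
static void unlock_key(void *key, size_t len)
{
    memset(key, 0, len);
    munlock(key, len);
}
```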

Ideally, an application will pool its related credentials to reduce the number of physical pages required to lock them in memory.

Upon successful completion of this function, the user now has the ciphertext (or plaintext depending on the direction) and the CCM tag. While the function may be a tad long, it is nicely bundled up in a single function call, making its deployment rather trivial.
