GEEK GLOSSARY / TECH TERMS

S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology), typically written simply as SMART, is a monitoring system built into modern hard drives that detects and reports a set of indicators, allowing the end user to assess the reliability of the drive.

about 6 months ago - by  |  Comments (0)

Data transfer rate is the measurement that encompasses both the internal transfer speed of a hard drive (movement of data from the disk surface to the disk controller in the drive) and the external transfer speed (data movement between the disk controller and the host operating system). The data transfer rate is typically benchmarked and recorded as the slower of the two in order to represent the real-world conditions under which the device transfers data.
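
The bottleneck principle behind this can be sketched in a few lines of Python (the drive speeds and file size here are hypothetical, purely for illustration):

```python
# Transfer time is governed by the slower of the two rates.
# All numbers below are hypothetical, for illustration only.
internal_rate_mb_s = 150   # disk surface -> controller
external_rate_mb_s = 600   # controller -> host
file_size_mb = 3000

bottleneck = min(internal_rate_mb_s, external_rate_mb_s)
transfer_time_s = file_size_mb / bottleneck
print(transfer_time_s)  # 20.0 seconds at the 150 MB/s bottleneck
```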

Seek time refers to the amount of time it takes a hard drive to respond to a request for a particular piece of data. In traditional magnetic hard drives, the seek time includes both the electronic communication between the operating system, motherboard, and hard drive itself, as well as the physical movement of the components within the hard drive (such as the actuator arm that moves the read/write head). Typical seek times for mechanical hard drives range from 4ms (for high speed server drives) to 15ms (for slower mobile or low-end consumer drives).

The Host Protected Area is a section of a hard disk that has been specially formatted and flagged so that it does not appear to the host operating system. This portion of the hard disk can be used for a variety of purposes including storing hidden data, security software to track stolen laptops, and vendor-specific utilities, but it is most typically used to house recovery software. Many desktop and laptop computers no longer ship with operating system reinstallation/recovery discs, for example, but instead include a large Host Protected Area that houses a recovery program that is accessible from the computer’s BIOS menu.

The last step in preparing a hard disk for use, high-level formatting is the process of setting up an empty file system in a new partition for use by an operating system. For example, when you install a new hard disk into your desktop computer, the final step after creating the partition is to instruct your formatting tool to create a file system (such as NTFS) on the new partition so that your operating system can access the drive and use it for storage.

Partitioning is the process of dividing a hard drive into one or more logical storage units, known as partitions. Partitioning allows a single physical disk to be divided into multiple logical disks for various purposes, including the separation of the operating system disk from the data storage disk, the installation of multiple operating systems, or other applications dependent on the division of data.

Low-level formatting is a hardware-level process that marks the surface of the disk to indicate the start of each recording block. These markers are typically referred to as sector markers and are referenced by the disk controller in order to read and write data to the disk.

Disk formatting is the process of preparing a disk or other storage medium for use by a particular operating system. The formatting process is typically divided into three distinct operations. First, a low-level format prepares the media for use. Second, the media is divided into one or more partitions. Third, a high-level format applies a file system (such as FAT32 or NTFS) to the newly created partition(s).

Zero-filling is the process of overwriting data with a series of zeros. You partially or completely zero-fill a given storage container either to overwrite the space on the hard disk where a recently deleted file was stored or to completely wipe the hard disk and all the files, folders, and other data structures contained therein.
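
As a rough sketch, here is what zero-filling a single file might look like in Python (illustrative only; on journaling file systems and SSDs, overwriting in place does not guarantee the old data is actually gone):

```python
import os

def zero_fill(path, chunk_size=4096):
    """Overwrite every byte of the file at `path` with zeros."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            n = min(chunk_size, size - written)
            f.write(b"\x00" * n)
            written += n
        f.flush()
        os.fsync(f.fileno())

# Create a small file, zero it, and confirm the contents are gone.
with open("demo.bin", "wb") as f:
    f.write(b"secret data")
zero_fill("demo.bin")
with open("demo.bin", "rb") as f:
    wiped = f.read()
print(wiped == b"\x00" * len(b"secret data"))  # True
```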

The Gutmann Method is an algorithm for securely erasing the contents of a computer hard drive. Introduced by Peter Gutmann in 1996, it utilizes a series of 35 patterns to completely and redundantly overwrite the contents of a hard disk. The method, and the white paper in which Gutmann outlined its use, was widely misapplied and misinterpreted: although many people used the full 35-pass technique, Gutmann never intended for the method to be used from start to finish in such a fashion.

DBAN is an acronym for Darik’s Boot and Nuke, an open-source project. DBAN is designed to facilitate the simple and secure erasure of hard drives so that data is no longer recoverable. It uses random-number overwrites and includes scripts for the Gutmann Method, Quick Erase, and Department of Defense-approved overwrites (3- and 7-pass).

In cryptography, both analog and digital, a cipher is an algorithm for transforming plaintext to ciphertext (unencrypted to encrypted) and reversing the process. A cipher could be as simple as shifting the letters of the alphabet forward one (a shift cipher) so that A becomes B and B becomes C, all the way around until Z becomes A. Modern encryption relies on radically more sophisticated ciphers that use advanced computations, split keys, and other cryptographic tricks only feasible with the aid of computers.
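
A toy shift cipher that moves every letter forward by a fixed amount, wrapping Z back around to A, can be written in a few lines of Python:

```python
def shift_cipher(text, shift):
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return "".join(result)

plaintext = "ATTACK AT DAWN"
ciphertext = shift_cipher(plaintext, 1)
print(ciphertext)                    # BUUBDL BU EBXO
print(shift_cipher(ciphertext, -1))  # ATTACK AT DAWN
```

Shifting by the negative amount reverses the process, which is what makes it a cipher rather than a one-way scramble.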

In a public-key cryptographic scheme, there are two distinct keys used to encrypt and decrypt data. The public key is used to encrypt data for the recipient, and then the recipient’s private key is used to decrypt it. Thus, if someone wanted to send a secure email message to Bill Gates via the widely used Pretty Good Privacy (PGP) email encryption program, they could look up Mr. Gates’ public key on a public key server and use that key to encrypt their message, and then only Mr. Gates could use his private key to decrypt it.
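
The split between the two keys can be illustrated with textbook RSA and deliberately tiny numbers (real keys are hundreds of digits long; never use values like these in practice):

```python
# Textbook RSA with toy numbers, for illustration only.
p, q = 61, 53
n = p * q                # 3233, part of both keys
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent: (e, n) is the public key
d = pow(e, -1, phi)      # private exponent: (d, n) is the private key

message = 65                       # a number standing in for the data
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
decrypted = pow(ciphertext, d, n)  # only the private-key holder can decrypt
print(ciphertext, decrypted)       # 2790 65
```

Knowing (e, n) is enough to encrypt, but recovering d from it requires factoring n, which is what keeps the private key private when the numbers are astronomically large.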

In a private-key cryptographic scheme, also known as symmetric-key encryption, the same key is used to encrypt the data as is used to decrypt it. Although there is an inherent security risk with private-key encryption schemes, as all parties must share the same key in order for the system to function, there are several widely adopted private-key encryption schemes, including Twofish and AES.
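
A minimal sketch of the shared-key idea, using a toy XOR scheme (for illustration only; real ciphers such as AES are vastly more sophisticated):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the key, repeating the key as needed."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"sharedsecret"
ciphertext = xor_cipher(b"meet at noon", key)  # encrypt with the key
plaintext = xor_cipher(ciphertext, key)        # the same key decrypts
print(plaintext)  # b'meet at noon'
```

Because XOR is its own inverse, applying the identical key twice recovers the original bytes, which is exactly the symmetric property the entry describes.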

Ciphertext is a term used in cryptography and computer security to refer to text which has been run through an encryption algorithm (the cipher that converted the text from plain, human-readable text to encrypted text). If you send an encrypted email using a public-key system, for example, part of the email is plaintext and part of the email is ciphertext. The header of the email and the information on how to retrieve the public key will be in plaintext, and the encrypted portion of the email intended only for the recipient will be in ciphertext.

Plaintext is a term used in cryptography and computer security to refer to text which is transferred without encryption. Unencrypted emails, for example, are sent entirely as plaintext when transferred between email hosts, as are instant messages and a wide variety of other communications. Even encrypted communication channels typically use plaintext during the initial negotiation before switching to an entirely encrypted exchange.

The Terms of Service is a set of rules an individual or organization must abide by in order to use a service. A Terms of Service agreement is legally binding (except in cases where the individual is below a certain age and does not have parental consent, or where the terms of service violate local, state, or federal laws) and generally covers various aspects of the user’s interaction with the service, such as how the user’s data will be used, how the user can use the service (e.g. you can send personal email but you can’t send commercial email), and so on.

In computer science, a cache is a hardware component that stores frequently used pieces of data for more rapid access. Modern CPUs, for example, have both instruction and data caches to increase the speed at which the processor can access instructions and data.
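
The entry above describes hardware caches, but the same idea appears in software; as a rough analogy, Python’s functools.lru_cache keeps recently computed results around for rapid reuse:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    """Naive recursive Fibonacci, made fast by caching prior results."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed quickly because repeated
                # subproblems are served from the cache
print(fib.cache_info().hits > 0)  # True: many lookups hit the cache
```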

Within the context of video gaming, frames per second (FPS) refers to the rate at which the screen image is refreshed. The more frames that can be fully rendered per second, the more fluid the game play appears to the player. When frame rates drop below 30 FPS in an action-heavy game (such as a first-person shooter like Halo or Call of Duty), the action appears choppy to the human eye. For most games 30-60 FPS is considered acceptable, but increasingly powerful hardware and sophisticated games have pushed the envelope to above 100 FPS, a refresh rate which provides extremely fluid and smooth in-game movement.
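
These frame rates translate directly into a per-frame time budget, which is easy to compute:

```python
def frame_budget_ms(fps):
    """Milliseconds available to render each frame at a given FPS."""
    return 1000.0 / fps

for fps in (30, 60, 100):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.2f} ms per frame")
# 30 FPS -> 33.33 ms per frame
# 60 FPS -> 16.67 ms per frame
# 100 FPS -> 10.00 ms per frame
```

The reason high frame rates demand powerful hardware is visible in the numbers: at 100 FPS the game has only 10 ms to fully render each frame.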

In computing, a buffer is a portion of physical memory set aside to temporarily store data while it is being moved and/or processed in some fashion. While buffers are used silently all around us as computers move data back and forth (such as when you spool up a giant set of playlists to copy over to your MP3 player), sometimes they are readily apparent. When you’re waiting for a YouTube video to load, for example, your web browser is decoding enough of the video and storing it in a buffer so that any latency or other data-transfer issues are smoothed over before playback catches up with the buffered video.
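
A stripped-down sketch of buffered copying in Python (an in-memory BytesIO stands in for a slow source such as a network stream):

```python
import io

source = io.BytesIO(b"x" * 10_000)  # stands in for a slow input stream
destination = io.BytesIO()

buffer_size = 4096
while True:
    chunk = source.read(buffer_size)  # fill the buffer from the source
    if not chunk:
        break                         # source exhausted
    destination.write(chunk)          # drain the buffer to the destination

print(destination.getvalue() == source.getvalue())  # True
```

Working through a fixed-size buffer lets the program smooth over a bursty source without holding the entire stream in memory at once.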

An obfuscator is a program designed to make it difficult to understand or reverse engineer source code. The obfuscator takes the clean, human-readable source code the programmer has created and does a thorough job shuffling it around, changing simple variable names to confusing ones, and otherwise making it difficult for another person to sit down and read the original clean copy (all while still maintaining the functionality of the code).

Transcoding is the process of converting a media file from one encoding format to another. For example, if you had a music album in AAC format but the media-playback function in your car’s in-dash digital music player would only accept MP3 files, you would need to use software to transcode the AAC-encoded album into an MP3-encoded album.

Just as a compiler turns a high-level programming language into a low-level one in order to run programs on the computer, a decompiler reverses the process, taking a low-level language (like machine code) and translating it back into a higher-level language (like C++).

A compiler is a computer program (or set of programs) that converts source code from the original programming language into another computer language. Typically, this process is used to convert a high-level programming language (such as C++) into a lower level language (such as machine code) so that the program can be run as an executable.
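
You can watch a small-scale version of this happen in Python, which compiles source code to bytecode before running it; the standard library’s dis module prints that lower-level output:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode the interpreter actually executes -- opcodes
# such as LOAD_FAST, far removed from the readable source above.
dis.dis(add)
```

The exact opcodes printed vary between Python versions, but the one-line function always expands into several low-level load, operate, and return instructions.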

High-level programming languages are computer programming languages with a strong abstraction that provides a high degree of human readability. Unlike low-level programming languages, which work by directly interacting with the processor (and as such are machine-readable, not human-readable), high-level programming languages provide natural-language-like constructs (such as IF statements and other human-readable structures) that make it easier for programmers to work with the language.
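
The kind of human-readable structure being described, shown in Python: the IF statement reads almost like English, even though the processor ultimately executes low-level compare-and-jump instructions that look nothing like this.

```python
temperature = 75

# A high-level IF statement: close to how a person would state the rule.
if temperature > 70:
    status = "warm"
else:
    status = "cool"

print(status)  # warm
```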
