M & E Track
A music-and-effects-only track, often recorded separately from the final mix so that foreign-language dialogue can be added later.
In the typical 4:2:0 picture representation used by MPEG-2, a macroblock consists of four eight by eight blocks of luminance data (arranged in a 16 by 16 sample array) and two eight by eight blocks of color difference data, which correspond to the area covered by the 16 by 16 luminance section of the picture. The macroblock is the basic unit used for motion compensated prediction.
See also: Block, Slice.
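The sample counts implied by the definition above can be worked out directly. A minimal sketch (illustrative arithmetic only, following the 4:2:0 layout described: four 8x8 luminance blocks plus one 8x8 block each of Cb and Cr):

```python
# Sketch: sample counts in one MPEG-2 4:2:0 macroblock.
# Four 8x8 luminance blocks cover a 16x16 area; the two color
# difference blocks (Cb and Cr) each cover that same area at
# reduced resolution.

BLOCK = 8 * 8                              # samples per 8x8 block

luma_blocks = 4                            # the 16x16 luminance array
chroma_blocks = 2                          # one Cb block + one Cr block

luma_samples = luma_blocks * BLOCK         # 256 luminance samples
chroma_samples = chroma_blocks * BLOCK     # 128 color difference samples
total = luma_samples + chroma_samples      # 384 samples per macroblock

print(luma_samples, chroma_samples, total)
```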
The date a cassette was made.
An information storage medium that is magnetically sensitive only at high temperatures, while stable at normal temperatures. A laser is used to heat a small spot on the medium (disc), allowing a normal magnet to change its polarity. The ability to tightly focus the laser greatly increases the data density over standard magnetic media. Magneto-Optical Disc drives can store up to 650 MB on a disc and have the capability of recording and erasing data for re-recording.
Part of the implementation of the MPEG-2 compression scheme with maximum resolution equivalent to ITU-R Recommendation 601. See MPEG-2.
A subset of the syntax of the MPEG-2 video coding specification that is expected to be supported over a large range of applications.
The ability to take a two-dimensional graphics file of a texture (wood grain, chrome, etc.) and project it onto an object, thus "wrapping" the object in the chosen texture.
Conversion of bytes (8 bits) to 2n-bit wide symbols. n is thus the bit width for the I and Q quantization; e.g., at 64 QAM the symbol width is 2n = 6 bits, n = 3 (I and Q are subdivided into 2³ = 8 amplitude values each).
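The regrouping can be sketched in a few lines. This is an illustrative bit-manipulation example only (the function names are invented for the sketch), showing bytes being cut into 6-bit symbols for 64 QAM and each symbol split into its 3-bit I and Q indices:

```python
# Sketch: mapping a byte stream to 2n-bit symbols (here 2n = 6 for
# 64 QAM), then splitting each symbol into n-bit I and Q indices
# (n = 3, so 2**3 = 8 amplitude values each).

def bytes_to_symbols(data: bytes, symbol_bits: int):
    """Concatenate the bits of `data` and cut them into symbol_bits-wide symbols."""
    bits = "".join(f"{b:08b}" for b in data)
    bits = bits[: len(bits) - len(bits) % symbol_bits]   # drop any remainder
    return [int(bits[i:i + symbol_bits], 2) for i in range(0, len(bits), symbol_bits)]

def split_iq(symbol: int, n: int):
    """Upper n bits -> I index, lower n bits -> Q index."""
    return symbol >> n, symbol & ((1 << n) - 1)

symbols = bytes_to_symbols(b"\xff\x00\xaa", 6)   # 24 bits -> four 6-bit symbols
print(symbols)                                    # [63, 48, 2, 42]
print([split_iq(s, 3) for s in symbols])
```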
Macroblocking refers to "blockiness." When macroblocking gets severe enough that chroma information starts to get lost, it is called "pixelization." The veil-like shimmer that can appear around a slowly moving object is "mosquitoing" or "netting"; either way, it looks like the object is wearing a mosquito net. "Bearding" or "tearing" (depending on whom you talk to) is when a bright object placed against a dark background loses some edge definition (first noticed with a face, hence "bearding"). Much like audio engineers, video types aren't an imaginative bunch :) (wah-wah, flanging, chorus...)
A Mark represents the point where a programmed event begins. There is only one Mark associated with an event and that event continues until the next Mark. Events must have a start Mark, but do not need a defined end. A single DUI session can store 9999 events.
See Material Acquisition Routing System. Also: Military Affiliate Radio System.
An original recording of a finished product. A Safety Master is one which is used only as a last resort, so that the risk of loss, damage or corruption is minimized. The Copy Master is usually a dub, or second original recording, from which all copies are made.
Master Control Switcher
1. An A/V switching device with many-in/one-out cross points. Used to switch A/V signals from various sources to the transmission signal path. 2. A switcher with one output that is capable of generating advanced video effects such as mixes and keys.
Match Frame Edit
An edit where the video and audio do not change from where the previous edit ended. The match frame edit is primarily used to do A/B transitions such as dissolves and effects between two source decks.
A general term for programs or commercials
Material Acquisition Routing System
System designed to provide monitoring of external A/V signals as received by a facility.
Unique identification code given to a piece of material stored within a system.
1. A set of equations used to combine signals. 2. An electronic circuit designed to implement such an equation. 3. A system of connecting equipment via crossover points, so that any machine can play to or record from any other machine.
A video switching device which allows an operator to route any video/audio signal to any source or monitor in the system.
Any Circuit in which something new is developed from more basic elements. Within a camera, the matrix circuit creates the luminance “Y” signal and the two color difference signals “R-Y” and “B-Y” or “I” and “Q” signals. It creates these signals from the basic R,G, and B signals. Matrixing of the signal takes place prior to the encoding process.
Matte
A black & white high contrast image that suppresses or cuts a hole in the background picture to reveal the foreground picture.
Mavica® (Magnetic video camera)
Maximum Video Levels
1,000,000 bits per second.
Abbreviation for megabytes per second, a data transmission rate in millions of bytes per second.
MBS (Multi-channel Broadcast System)
MCS (Master Control Switcher)
See DVS and DVS-M1000C.
The focusing of a camera lens or pick-up device by manual adjustment.
Materials or technical means of communication—using films, art, video, voice, music, computer programming, etc.
Apple’s strategy for incorporating media such as sound and video into the Macintosh operating system so that motion video, voice, and music can be easily integrated into every application as graphics are today.
Part of the plant where physical media (tapes, etc.) are stored.
External Masters, House Masters, Blank Stock, Copies and Clones or Dubs. The management of all these media types within TCS.
A video server by BTS/Philips.
When used with any unit of measure, indicates one million times the base unit.
A unit of measurement equal to 1024 kilobytes (K) or 1,048,576 bytes. Data files, especially sound, graphics, and digital video files, are measured in MB.
A stored set of parameters that can be changed on a scene by scene basis. There are various types of memory, but they all store these same parameters. The Renaissance 8:8:8 uses Base mems, Scratchpad mems, Event mems, Original Scene mems, and Preview mems.
Metadata
Information other than Essence that has no inherent stand-alone value but is related to Essence (i.e., it is contextual and has no meaning outside of its relationship to the associated Essence). Examples of Metadata include: URL, URI, timecode, MPEG-2 PCR, filename, program labels, copyright information, version control, watermarking, conditional-access keys, etc. Metadata will either travel through the system, multiplexed or embedded with the Essence from one or more of the other planes, or it will be stored in a known location for subsequent reference.
Informational data about the data itself. Typically information about the audio and video data included in the signal's data stream.
Mezzanine Compression or Mezzanine Rate
Contribution-level quality encoded high definition television signals. Typically split into two levels: High Level at approximately 140 Mbps and Low Level at approximately 39 Mbps (for high definition within the studio, 270 Mbps is being considered). These levels of compression are necessary for signal routing and are easily re-encoded without additional compression artifacts (concatenation) to allow for picture manipulation after decoding. DS3 at 44.736 Mbps will be used in both terrestrial and satellite program distribution.
Abbreviation for megahertz, or millions of cycles per second. The bandwidth of the NTSC video signal is 4.2 MHz. A normal U.S. television transmission channel is 6 MHz.
Miniature icons: Small graphic representations of media element types (Graphics, videotape, videodisc, Mac Sound, CD Sound, etc.)
Part of the electromagnetic spectrum, microwaves are high-frequency waves that exhibit highly directional characteristics. Microwave transmission of video signals is a common method of linking several sites in a CCTV system. Transmission by microwave requires modulating the video signal to a microwave carrier frequency, and demodulating the signal at the receiving end. Microwave transmission provides excellent picture resolution and can deliver signals in a line-of-sight over a radius of many miles.
MIDI (Musical Instrument Digital Interface)
MIDI is the communication standard for exchanging digital data between a musical instrument (e.g. a synthesizer or electric piano) and a MIDI equipped device.
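The data exchanged is a stream of short binary messages. As an illustrative sketch only (the helper function is invented here), a standard three-byte MIDI Note On message can be built like this:

```python
# Sketch: constructing a 3-byte MIDI Note On channel voice message.
# Status byte = 0x90 | channel (channels are 0-based in the byte
# layout), followed by a note number and a velocity, both 0-127.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)   # middle C on channel 1
print(msg.hex())                                   # 903c64
```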
One one-thousandth of an inch.
Multipurpose Internet Mail Extensions. A protocol for Internet e-mail that enables the transmission of non-text data such as graphics, audio, video and other binary types of files.
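Each attachment in a MIME message is labeled with a media type derived from its content or filename. As a small sketch, Python's standard mimetypes module performs this filename-to-MIME-type mapping:

```python
# Sketch: mapping filenames to the MIME types carried in e-mail
# (and HTTP Content-Type) headers, using the standard library.

import mimetypes

for name in ("clip.mp4", "logo.png", "notes.txt"):
    mime, _encoding = mimetypes.guess_type(name)
    print(name, "->", mime)
```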
Mirror Mother Tape
In Sony's high-speed Sprinter® duplication systems, a Mirror Mother machine makes a mirror image of the master tape. It is the mirror mother tape that is used on the Sprinter as a master for high-speed duplication.
1. The process of combining several elements, audio or video, into a composite. 2. A fade between two video images.
A device used to combine audio signals. In the United Kingdom, also refers to the person who switches a show; the vision mixer performs a job very similar to that of the technical director in North America.
Combining two or more audio or video sources.
A device that transforms a typical two-level computer signal into a form suitable for transmission over a telephone line. It also does the reverse: transforms an encoded signal on a telephone line into a two-level computer signal.
Modular vs. Built-in
Modular systems offer users the flexibility to relocate their equipment when facility plans change. Built-in systems offer greater ability to customize equipment to fit a unique application; for example, where more than two monitors are used or where very large monitors are required.
The process of adding information in the form of an analog signal to an existing signal carried by a transmission medium, i.e., a carrier. The added signal effectively "rides along" the transmission signal.
The technique of processing a signal to transform its content to different frequencies so that the signal can be transmitted or multiplexed (see AM and FM).
An analog color test signal.
A picture fault, caused by detail too fine to be accurately reproduced in a recording, which is characterized by a pattern of wavy lines over all or part of the picture. So called because of its resemblance to the pattern of watermarked silk.
A seamless MPEG-2 concatenation technology developed by the ATLANTIC project (BBC [U.K.], Centro Studi e Laboratori Telecomunicazioni [Italy], Ecole Nationale Superieure des Telecommunications [France], Ecole Polytechnique Fédérale de Lausanne [Switzerland], Electrocraft [U.K.], Fraunhofer-Institut für Integrierte Schaltungen [Germany], Instituto de Engenharia de Sistemas e Computadores [Portugal], Snell & Wilcox [U.K.]). An MPEG-2 bitstream enters a Mole-equipped decoder, which decodes not only the video but also the information on how that video was first encoded (motion vectors and coding mode decisions). This "side information" or "metadata" is carried in an information bus, synchronized to the video, and sent to a Mole-equipped encoder. The encoder looks at the metadata and knows exactly how to encode the video. The video is encoded in exactly the same way (so theoretically it has been encoded only once) and maintains quality.
If an opaque bug is inserted in the picture, the encoder only has to decide how the bug should be encoded (and then both the bug and the video have been theoretically encoded only once).
Problems arise with transparent or translucent bugs, because the video underneath the bug must be encoded, and therefore that video will have to be encoded twice, while the surrounding video and the bug itself have only been encoded once theoretically.
What Mole cannot do is make the encoding any better; therefore, the highest quality of initial encoding is suggested.
A video display that does not contain a channel selector or an RF input. A monitor only accepts video signals in composite or component form, not "over the air" as a TV set does. The term can also refer to any point in a television system where visual or audible observation occurs.
A single-color video signal (e.g. black or white) or the luminance component of the signal.
1. A composite picture made up of several images. 2. The production of a rapid succession of images to illustrate an association of ideas, or a passage of time.
Used to edit content-controlled turnaround material, and to edit interstitial material that is not tone-triggered into turnaround content.
Computer-generated special effect whereby one image is caused to metamorphose into another. Made famous and popular by Michael Jackson's "Black or White" video and "Terminator 2: Judgment Day."
Mute. Slang for silent shooting, from the pseudo-German "Mit Out Sprechen" ("without talking"). The correct German phrase would be "ohne Sprechen."
Metal Oxide Semiconductor Field Effect Transistor. Technology used in special-purpose transistors.
The use of motion vectors to improve the efficiency of the prediction of pixel values. The prediction uses motion vectors to provide offsets into past and/or future reference frames containing previously decoded pixels that are used to form the prediction and the error difference signal.
An image compression technique that achieves compression by describing only the motion differences between adjacent frames, thus eliminating the need to convey redundant static picture information from frame to frame. Used in the MPEG standards.
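The idea above can be sketched on a toy one-dimensional "frame." This is a minimal illustrative example (not any particular codec's implementation): a block in the current frame is predicted from a motion-vector offset into the reference frame, and only the difference would need to be coded.

```python
# Sketch of motion-compensated prediction on 1-D toy frames: the
# current frame is the reference frame shifted by 2 samples, so a
# motion vector of +2 predicts each block perfectly and the error
# difference (residual) to be coded is all zeros.

def predict(ref, pos, mv, size):
    """Block of `size` samples that the motion vector points to in ref."""
    return ref[pos + mv : pos + mv + size]

ref = list(range(100, 132))             # reference frame (32 samples)
cur = ref[2:] + ref[:2]                 # same content, shifted left by 2

block = cur[8:16]                       # block to be coded
pred = predict(ref, 8, mv=2, size=8)    # vector +2 undoes the shift
residual = [c - p for c, p in zip(block, pred)]
print(residual)                          # all zeros: perfect prediction
```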
A video version of the JPEG picture compression scheme developed for still images. Motion-JPEG is used to compress a series of video frames to allow recording on a computer hard disk. For example, 30 Motion-JPEG frames viewed in one second will approximate 30-frame-per-second video. Not to be confused with MPEG, which compresses video temporally as well as spatially.
Moving Coil Microphone
A low-impedance type of microphone that operates on electromagnetic principles.
Moving Pictures Expert Group (MPEG)
The name of a family of standards used for coding audio-visual information (e.g., movies, video, music) in a digital compressed format.
MPEG Layer 3. A standard for audio compression capable of roughly 10:1 compression with no noticeable loss in quality. MP3s have become a popular way to distribute CD-quality music on the Internet.
MP@HL (Main Profile at High Level)
MP@ML (Main Profile at Main Level)
MPEG (Moving Picture Experts Group)
Compression standards for moving images conceived by the Moving Picture Experts Group, an international group of industry experts set up to standardize compressed moving pictures and audio. MPEG-2 is the basis for ATSC digital television transmission.
Its work follows on from that of JPEG to add interfield compression, the extra compression potentially available through similarities between successive frames of moving pictures. Four MPEG standards were originally planned, but the accommodation of HDTV within MPEG-2 has meant that MPEG-3 is now redundant. MPEG-4 is intended for unrelated applications, but can be used to display ATSC formats on a PC. The main interest for the television industry is in MPEG-1 and MPEG-2.
See also: B frames, GOP, I frames, P frames.
MPEG-1 was designed to work at 1.2 Mbits/sec, the data rate of CD-ROM, so that video could be played from CDs. However, the quality is not sufficient for TV broadcast.
Refers to ISO/IEC standards 11172-1 (Systems), 11172-2 (Video), 11172-3 (Audio), 11172-4 (Compliance Testing), and 11172-5 (Technical Report).
This has been designed to cover a wide range of requirements, from "VHS quality" all the way to HDTV, through a series of algorithm "profiles" and image resolution "levels." With data rates of between 1.2 and 15 Mbps, there is intense interest in the use of MPEG-2 for the digital transmission of television, including HDTV, the applications for which the system was conceived. Coding the video is very complex, especially as it is required to keep the decoding at the reception end as simple and inexpensive as possible. MPEG-2 is the compression used by the ATSC and DVB standards.
MPEG can offer better-quality pictures at high compression ratios than pure JPEG compression, but with the complexity of decoding (and especially coding) and the 12-frame group of pictures (GOP), it is not an ideal compression system for editing. If any P or B frames are used, then even a cut will require the re-use of complex, and not perfect, MPEG coding. However, MPEG splicers are beginning to appear to alleviate this difficulty.
Of the five profiles and four levels creating a grid of 20 possible combinations, 11 have already been implemented. The variations these define are so wide that it would not be practical to build a universal coder or decoder. Interest is now focused on the Main profile, Main level, sometimes written as MP@ML, which covers broadcast television formats up to 720 pixels x 576 lines at 30 frames per second. These figures are quoted as maximums so 720 x 486 at 30 frames are included, as are 720 x 576 at 25 frames. As the coding is intended for transmission the economy of 4:2:0 sampling is used.
A recent addition to MPEG-2 is the studio profile. Designed for studio work its sampling is 4:2:2. The studio profile is written as 422P@ML. To improve the picture quality, higher bit rates are used. The first applications for this appear to be in electronic news gathering (ENG), and with some video servers.
See also: B frames, Compression, GOP, I frames, JPEG, P frames.
Refers to ISO/IEC standards 13818-1 (Systems), 13818-2 (Video), 13818-3 (Audio), and 13818-4 (Compliance Testing). MPEG-2 extends the MPEG-1 standard to cover a wider range of applications.
MPEG-3 was originally targeted for HDTV applications. This has now been incorporated into MPEG-2.
A set of coding tools for audio-visual objects developed to support various functionalities, such as object-based interactivity and scalability. A syntactic description of audio-visual objects, allowing a way of describing the coded representation of the objects and how they were coded. This information can be conveyed to a decoder, enabling new algorithms to be downloaded for execution. It is expected in late 1998.
Also referred to as Studio MPEG, Professional MPEG and 422P@ML. Sony's Betacam SX is based on MPEG 4:2:2. See: MPEG-2.
The ability to cut into an MPEG bitstream for switching and editing, regardless of type of frames (I, B, P).
Most significant bit. The bit that has the most value in a binary number or data byte. In written form, this would be the bit on the left.
For example: Binary 1110 = Decimal 14
In this example, the left-most binary digit, 1, is the most significant bit, here representing 8. If the MSB in this example were corrupt, the decimal value would be not 14 but 6.
See also: LSB.
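The example above can be verified with a couple of lines of bit arithmetic (an illustrative sketch only):

```python
# Sketch: corrupting (flipping) the MSB of the 4-bit value from the
# example above. Binary 1110 = 14; with the MSB flipped, 0110 = 6.

value = 0b1110            # decimal 14
msb_mask = 0b1000         # the most significant of the four bits
corrupted = value ^ msb_mask
print(value, corrupted)   # 14 6
```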
Maximum Transmission Unit. The greatest amount of data or "packet" size that can be transferred in one physical frame on a network. This packet also contains the header and trailer information, which are like addresses for each packet that are required by the routers on the network.
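Since each frame carries header/trailer overhead as well as payload, the MTU determines how many frames a given payload needs. A minimal sketch, assuming an illustrative 1500-byte MTU and 40 bytes of combined header overhead (both values are assumptions for the example, not part of the definition):

```python
# Sketch: number of physical frames needed to carry a payload on a
# link with a given MTU, where each frame also carries `overhead`
# bytes of header/trailer information.

import math

def frames_needed(payload_bytes: int, mtu: int, overhead: int) -> int:
    """Each frame carries at most mtu - overhead bytes of payload."""
    capacity = mtu - overhead
    return math.ceil(payload_bytes / capacity)

print(frames_needed(4000, mtu=1500, overhead=40))   # 3 frames
```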
Multi-User Dungeon or Dimension. A usually text-based, multi-user simulation environment. Some are purely for fun and game playing, while others are used for serious software development, education purposes, and all that lies in between. A significant feature of most MUDs is that users can create things that stay after they leave and which other users can interact with in their absence, thus allowing a "world" to be built gradually and collectively.
An acronym for MUtual EVent. In lip-sync technology, a visual event and a corresponding sound event are MUEVs. For example, a round shape of the lips and the sound "O" are MUEVs; the swing of a baseball bat and the crack of it hitting the ball are MUEVs. Keeping MUEVs aligned is an issue that can plague television and film.
Multiburst or Multiburst Pattern
A video test signal containing six packets of different frequencies that are used in measuring the frequency response of a video system.
The primary hardware portion of a Library Management System™ (LMS). It consists of a number of internal VTRs, an audio/video matrix switcher, cart controller, cassette storage console, VTR console, and elevator mechanism.
Point-to-multipoint broadcasting of content, e.g., a live streaming event (such as a Victoria's Secret webcast) that is viewed simultaneously by many end users.
Usually refers to any presentation of information using more than one medium; for example, a presentation using text and video. More commonly refers to the integration of various media (especially videodisc, computer graphics, and text) through the use of a Personal Computer.
A method of transmission where more than one channel of information or video signal is transmitted over a single signal path. To combine several different signals (video, audio, data) onto a single communication channel for transmission. Demultiplexing separates each signal on receipt. Multiplexing would be used to provide transmission of several video sources over one carrier, thereby reducing total transmission costs. Fiber optics, coaxial cable, and microwave transmission schemes are all capable of multiplex transmission.
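The combine-then-separate round trip can be sketched with a simple time-division style interleave (an illustrative toy example, not any particular transmission standard):

```python
# Sketch: multiplexing several signals onto one channel by
# interleaving their samples, and demultiplexing on receipt.

def multiplex(*signals):
    """Interleave equal-length signals into one composite stream."""
    return [s for group in zip(*signals) for s in group]

def demultiplex(stream, n):
    """Recover n interleaved signals from the composite stream."""
    return [stream[i::n] for i in range(n)]

video = ["v0", "v1", "v2"]
audio = ["a0", "a1", "a2"]
link = multiplex(video, audio)     # one signal path carries both
print(link)                        # ['v0', 'a0', 'v1', 'a1', 'v2', 'a2']
print(demultiplex(link, 2))        # the original two signals
```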
Device for combining two or more electrical signals into a single, composite signal.
A communication system that allows three or more sites to both transmit and receive signals. Using telephone conversations as an example, a typical individual-to-individual telephone call is referred to as a point-to-point call. A conference call is referred to as a multipoint call.
A videoconference of three or more sites. A Multipoint Control Unit (MCU) can link up to eight or more remote sites into a single conference or manage as many as three simultaneous, independent conferences (segmenting). Multiple MCUs can be combined to increase the maximum number of sites in a single conference (cascading).
Similar to Multiburst.
A monitor capable of scanning at a range of frequencies, thus, allowing the use of various graphics adapters.
1. Sony cart system term for a cassette tape containing multiple short form material events the location of which is recorded on the tape as user bit time code data. 2. A program that consists of at least one segment. All of the segments share the same ID.
Multiple individual spots are placed on one physical tape. Each spot is identified by a media ID number (the spot ID number) and each tape by a box ID number. A multispot tape is intended for compile or cache tasks.
Recording technique where multiple tracks or channels of audio material are recorded separately and later mixed down to achieve optimum results for mono or stereo playback.
MUSICAM (Masking Pattern Adapted Universal Subband Integrated Coding and Multiplexing) Compression method for audio coding.
Images with no sound.
Proprietary name of the Du Pont Chemical Corporation, used to identify the base material of most videotapes.