What are codecs used for
Once you know what codecs are used for, you can decide which one best fits your needs. Lossless codecs are just what they sound like: they reproduce video exactly as it was captured, without any loss in quality.
Lossy codecs, on the other hand, discard a small amount of information but can compress material into a much smaller file. They are great for compressing data that needs to be sent by e-mail or uploaded to the internet. Use caution when choosing a lossy codec, though: some formats introduce visible color shifting. Overall, all codecs work toward the same end: putting your data into a manageable file type with as little loss of quality as possible.
Transform codecs cut the material up into smaller chunks, or blocks, before actually compressing it, which helps produce a smaller file.
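To make that concrete, here is a toy sketch of block-based transform coding, assuming NumPy and SciPy are available. The 8x8 block size, the keep parameter, and the function names are illustrative choices for this example, not any particular codec's specification; real codecs follow the transform with quantization and entropy coding.

```python
# A toy sketch of transform coding: transform a block of pixels and discard the
# smallest coefficients. Illustrative only -- not how any specific codec works.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=16):
    """DCT an 8x8 block and keep only the `keep` largest-magnitude coefficients."""
    coeffs = dctn(block, norm="ortho")
    threshold = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < threshold] = 0.0   # drop small, mostly high-frequency detail
    return coeffs

def decompress_block(coeffs):
    """Invert the transform to get pixel values back (approximately)."""
    return idctn(coeffs, norm="ortho")

# A fake 8x8 chunk of pixel data standing in for a real frame.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
restored = decompress_block(compress_block(block))
print("worst-case pixel error:", round(float(np.abs(block - restored).max()), 2))
```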
The most widely recognized family of codecs is based on the MPEG standards, named for the Moving Picture Experts Group, the organization that sets and codifies them. There are a number of primary MPEG formats and a multitude of derivative types. MPEG-1 produces a data stream that reproduces video with high quality. One drawback is that MPEG-1 allows only progressive scanning. Progressive scanning is a method of storing and displaying moving images in which all of the lines of the image are drawn in sequence. This is in contrast to interlaced scanning, where all of the odd lines of an image are drawn first, followed by all of the even lines.
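To make the distinction concrete, here is a minimal sketch, assuming NumPy, of how a player might weave the two fields of an interlaced frame back into a full-height picture. The function names are made up for this example.

```python
# Progressive video stores every row of a frame in order; interlaced video
# delivers two half-height fields (even rows, then odd rows) that get woven
# back together for display.
import numpy as np

def split_fields(frame):
    """Split a full frame into its two interlaced fields."""
    return frame[0::2], frame[1::2]          # even-numbered rows, odd-numbered rows

def weave_fields(top, bottom):
    """Re-interleave the two fields into a full-height frame."""
    frame = np.empty((top.shape[0] + bottom.shape[0],) + top.shape[1:], dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame

frame = np.arange(6 * 4).reshape(6, 4)       # a toy six-line "frame"
top, bottom = split_fields(frame)
assert np.array_equal(weave_fields(top, bottom), frame)
```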
MP3, while lossy and quite small, is the standard for nearly all digital music storage devices, audio players, and retail sites. MPEG-4 supports both progressive and interlaced video, employs better compression techniques than MPEG-1, and is a widely accepted compression standard. In fact, a number of codecs are derived from the MPEG-4 standard; one is H.264.
Adjusting its size allows users to apply this same standard to compression for broadcast, multimedia use, and large-file storage. ProRes is another widely used codec.
You can find it in several formats, including ProRes 422, ProRes 4444, and ProRes RAW. Its developers boast that it can handle up to 8K media with superior playback, and superior color resolution is another main feature. WMV is also widely used: with the glut of Windows users out there, it is no wonder this codec family is so popular. Originally designed to compress files for internet streaming, WMV was introduced as a competitor to the RealVideo compression codec. Be sure to note the difference between a codec and a container.
So what is a container? It is a lot like the wrapping on a present. Codecs are held within a container, which is what combines the audio and video into a single file and also holds metadata.
When looking at a file name, the container is indicated by the file extension. Some containers can support only certain codecs, while others can support a wide variety of audio and video codecs. It is the container's job to let media players know to play the audio and video together.
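For illustration only, here is a small sketch of that relationship: a lookup from file extension to container name and the codecs each container commonly carries. The pairings are typical examples, not an exhaustive or authoritative compatibility list.

```python
# An illustrative map from extension to container and commonly carried codecs.
COMMON_CONTAINERS = {
    ".mp4":  {"container": "MPEG-4",    "video": ["H.264", "H.265"],      "audio": ["AAC"]},
    ".mov":  {"container": "QuickTime", "video": ["ProRes", "H.264"],     "audio": ["PCM", "AAC"]},
    ".mkv":  {"container": "Matroska",  "video": ["H.264", "VP9", "AV1"], "audio": ["AAC", "FLAC"]},
    ".webm": {"container": "WebM",      "video": ["VP8", "VP9", "AV1"],   "audio": ["Vorbis", "Opus"]},
    ".wmv":  {"container": "ASF/WMV",   "video": ["Windows Media Video"], "audio": ["Windows Media Audio"]},
}

def describe(filename: str) -> str:
    """Report the container and typical video codecs implied by a file's extension."""
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    info = COMMON_CONTAINERS.get(ext)
    if info is None:
        return f"{filename}: unrecognized container"
    return f"{filename}: {info['container']} container, typically {', '.join(info['video'])} video"

print(describe("vacation.mp4"))   # vacation.mp4: MPEG-4 container, typically H.264, H.265 video
```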
There are hundreds of codecs that can be applied in different use cases. Below is a list of some of the most commonly used codecs and their purposes. H.264, also referred to as MPEG-4 (formally MPEG-4 Part 10, or AVC), is one of the most commonly used video codecs.
It uses lossy compression and is widely supported in production, post, and distribution. Many cameras record in this codec, and it is the standard for most web video hosting and for Blu-ray. Red Digital Cinema uses its own codec, REDCODE, for its cinema cameras. It produces high-quality image compression with low loss, and its files use the .R3D extension.
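If you want to check which codec a particular file actually uses, one practical option is FFmpeg's ffprobe tool. The sketch below assumes ffprobe is installed and on your path; "clip.mp4" is just a placeholder file name.

```python
# Ask ffprobe (part of FFmpeg) for the codec of the first video stream.
import subprocess

def video_codec(path: str) -> str:
    """Return the codec name of the first video stream, as reported by ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(video_codec("clip.mp4"))   # e.g. "h264" or "prores"
```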
Apple ProRes is a series of compression codecs designed for intermediate post-production work; it replaced the Apple Intermediate Codec. ProRes is very popular as an acquisition codec and is widely supported by software companies. This highlights the second fundamental trade-off of lossy compression technologies: quality for decode complexity. That is, the more quality a codec delivers, the harder it is to decode, particularly in interactive applications like video editing.
For example, with DV and Motion-JPEG, each frame was completely self-contained, so you could drag the editing playhead to any frame in the video and it would decompress in real time. However, with the MPEG-2-based HDV, if you dragged the playhead to a B-frame, the non-linear editor had to decompress all frames referenced by that B-frame, and those frames could be located before or after that B-frame in the timeline.
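The sketch below models that difference with a toy group-of-pictures pattern. The dependency rules here (a closed GOP, each P-frame chained back through earlier anchors, each B-frame also needing the next anchor) are a deliberate simplification for illustration, not MPEG-2's actual reference rules.

```python
# Which frames must be decoded to display a given frame in a long-GOP stream?
GOP = list("IBBPBBPBBPBB")   # a typical display-order pattern

def frames_needed(index, gop=GOP):
    """Indices that must be decoded before frame `index` can be displayed."""
    kind = gop[index]
    if kind == "I":
        return [index]                            # self-contained, like a DV or Motion-JPEG frame
    anchors = [i for i, k in enumerate(gop) if k in "IP"]
    deps = {i for i in anchors if i < index}      # the chain of past I- and P-frames
    if kind == "B":                               # B-frames also reference the next anchor...
        later = [i for i in anchors if i > index]
        if later:
            deps.update(frames_needed(later[0], gop))   # ...and everything that anchor needs
    deps.add(index)
    return sorted(deps)

print(frames_needed(0))   # [0]          -> an I-frame decodes on its own
print(frames_needed(4))   # [0, 3, 4, 6] -> a B-frame drags in anchors on both sides
```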
On the underpowered computer systems of the day, most running 32-bit operating systems that could address only 2 GB of memory, long-GOP formats caused significant latency, which made editing unresponsive.
This gave rise to intermediate codecs designed specifically for editing, including those from CineForm, Inc. These codecs use solely intra-frame compression techniques for maximum editing responsiveness, and very high data rates for quality retention. In the acquisition role, by contrast, the codec's job is to capture at as high a quality as possible while meeting the data rate requirements of the camcorder's on-board storage.
As mentioned, in the intermediate role, these codecs are designed to optimize editing responsiveness and quality. In the delivery role, codecs must match the data rate mandated by the delivery platform, which in the case of streaming is far below the rates used for acquisition.
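To put rough numbers on those roles, here is a back-of-the-envelope sketch. The bitrates are ballpark figures chosen for illustration, not published specifications for any particular codec.

```python
# Convert a video bitrate into the storage it consumes per hour of footage.
def gb_per_hour(megabits_per_second: float) -> float:
    bytes_per_second = megabits_per_second * 1_000_000 / 8
    return bytes_per_second * 3600 / 1_000_000_000

ILLUSTRATIVE_RATES_MBPS = {
    "intra-frame intermediate/acquisition codec": 220,
    "long-GOP camcorder acquisition": 25,
    "streaming delivery": 5,
}
for role, mbps in ILLUSTRATIVE_RATES_MBPS.items():
    print(f"{role}: {mbps} Mbps is about {gb_per_hour(mbps):.2f} GB per hour")
```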
Briefly, container formats, or wrappers, are file formats that can hold specific types of data, including audio, video, closed-captioning text, and associated metadata. Though there are some general-purpose container formats, like QuickTime, most target one aspect of the production and distribution pipeline, like MXF for file-based capture on a camcorder, or FLV and WebM for streaming. In some instances, a container format has a single or predominant codec, like Windows Media Video in the WMV container format.
However, most container formats can hold multiple codecs. Technically, at least from the ISO side of things, H.264 is known as MPEG-4 Part 10, or AVC, because the standard was developed along two paths. One path was through the International Organization for Standardization (ISO), whose standards impact the photography, computer, and consumer electronics markets.
The other path was via the International Telecommunication Union (ITU), the leading United Nations agency for information and communication technology issues, which contributes to standards in the telephone, radio, and television markets. The ITU debuted its first video-conferencing-related standard, H.261.
In 2005, the first video-capable iPod shipped, also with H.264 playback. Adobe later added H.264 support to Flash as well. Aside from a few holdouts, almost every market, from iPods to satellite TV, is primarily driven by the H.264 codec. Finally, since most video is captured with audio, the audio component must also be addressed. PCM is considered uncompressed, so it may be more properly characterized as a file format than a codec. To preserve quality, most intermediate codecs simply pass through the uncompressed audio as delivered by the camcorder.
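As a final bit of arithmetic, the sketch below shows why PCM counts as uncompressed: its data rate is simply sample rate times bit depth times channel count. The 48 kHz, 16-bit, stereo figures are a common professional-video configuration chosen for illustration.

```python
# Raw PCM data rate: no encoding step, just samples written out as-is.
def pcm_megabits_per_second(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    return sample_rate_hz * bit_depth * channels / 1_000_000

mbps = pcm_megabits_per_second(48_000, 16, 2)
print(f"{mbps:.3f} Mbps, roughly {mbps / 8 * 3600 / 1000:.2f} GB per hour of audio")
```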