Archived Tech-Notes
Published by: Larry Bloomfield & Jim Mendrala      The following are our current e-mail addresses:
E-mail = or
 We have copied the original Tech-Notes below as it was sent out.  Some of the information may be out of date.


Published by: Larry Bloomfield & Jim Mendrala

E-mail = or

August 14, 2000

Tech Note - 061


Sharing experiences, knowledge, observations or anything relating to Digital Television, Digital Cinema, etc. with fellow engineers and readers is our purpose. Our mission statement, other relevant information and this current issue of the Tech-Notes are now posted on our new website, where you can also find our past issues. We had over 1200 visitors; thanks.  We are growing: we now have over 810 subscribers. Thanks to our regulars and welcome to the new folks.  This is YOUR forum! 


We need help, suggestions, etc. Visit us at:


Reader comments:


Re: Tech-Notes #60

Larry:   Have just read your latest Tech-Notes regarding the standards-making process.  I agree with John, and I think with what Jim would say.  I have been in many, many meetings with Jim over the years.  You are just absolutely wrong.

Regards, Bill


Response from Larry Bloomfield:  Everyone is entitled to his or her own opinion.  Most of us have expressed ours, now let’s move along for the common good.   Larry 


From: Roy Trumbull

Subject: ENG Shock

PG&E will come to your TV station and do a safety program on electrical hazards. They have a tape that shows what happens when man and electrons meet. Ain't pretty.


(Ed Note:  In checking around, this tape may be available through many electrical public utilities.  Check with your local utility to see if they have it, or have them borrow it from Pacific Gas and Electric (PG&E).)


(Ed Note:  The following is in response to a comment made that broadcasters are getting a “free ride” when it comes to DTV.)

From: Mark Schubin

Subject: DTV Success

There's no question that there are some large corporations in the broadcasting business, and it's likely that some of them will make some money out of DTV. But there are some 1600 analog television stations in the U.S., and the vast majority of them are in tiny markets (the top-25 markets cover 50% of U.S. television households, 185 markets cover the other half -- 44 of them with less than 100,000 television households each).

I just opened TV & Cable Factbook to one broadcaster's page. It's KXGN-TV in Glendive, Montana. It is the CBS, Fox, AND NBC affiliate in town. It is owned by Glendive Broadcasting Corp. They transmit on channel 5 at 14.8 kw on an antenna on a 513-foot stick. Their Grade A contour includes the town of Beach, Montana. The Grade B contour gets them to Baker and Sidney -- not quite to Miles City, so they operate a satellite station, KYUS-TV, there (10.4 kw on a 102-foot stick on channel 3). In 1999, according to Nielsen, the station had an average daily circulation of 2600 households, which seems to include 980 from the Miles City satellite.

Now, then, this "free ride" that the government has granted Glendive Broadcasting Corp. requires them, by May 1 of 2002, to purchase and install (assuming no redundancy) two new transmitters (Glendive's will be on channel 15 at 125.6 kw, increasing the power bill eightfold), two new transmitting antennas, two new sets of feed line, possibly two new towers (and the costs and insurance of erecting them), two 8-VSB modulators, two ATSC multiplexers, two Dolby Digital encoders, two MPEG-2 video encoders, and assorted test equipment, accessories, and hardware, just to comply with the FCC regulations. This includes nothing for HDTV, nothing for multicasting, and nothing for datacasting.
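The "eightfold" power-bill figure can be sanity-checked from the ERP numbers Schubin cites. This is a rough proxy only: the actual power bill depends on transmitter efficiency at UHF, not just effective radiated power, so this sketch simply shows the ERP ratio behind the claim.

```python
# Ratio of KXGN-TV's required DTV ERP to its current NTSC ERP
# (both figures taken from Schubin's letter above).
ntsc_kw = 14.8    # channel 5 analog, as licensed
dtv_kw = 125.6    # channel 15 digital, as assigned

ratio = dtv_kw / ntsc_kw
print(f"ERP increase: {ratio:.1f}x")  # roughly the "eightfold" increase cited
```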

No doubt they can expect a big DTV windfall from those 2600 households (who will have to buy new receivers to see the new signals).

For KTNL in Sitka, Alaska (a CBS and Paramount affiliate), the average daily circulation is 682 households.

"Obviously" the powers that be have granted these stations a "free" ride. May the powers that be never grant ME such a "free" ride.

TTFN, Mark


(Ed Note:  There has been much talk about programs being streamed over the Internet.  It is something broadcasters, especially with a new digital tool coming their way, should be aware of.  Heather Parram, executive producer for AOL of Big Brother, was recently a guest on Online Tonight, an Internet forum, and stated that the average number of simultaneous streams viewing the house was in the 30,000 range. She said that the biggest peak was just after the show signed off after banishing Will/Mega, at nearly 90,000 streams.  Somehow these numbers make sense, given the network promotion, the online promotion on AOL, and the media coverage.  Do they really make sense, and is it something that will grow?)

From: Jacques Mattheij

Subject: Numbers on Big Brother

Yep, that could be right: at a 56 kbit/s average, 90,000 streams -> about 5 Gbit/s. Assuming the majority of the viewers are AOL people, they can get away with that kind of load by re-transmission to a couple of local points of presence and then from there to their end-users. This is speculative; I don't have any inside info on the way AOL has put their stuff together, but we could easily verify this by monitoring where the stream comes from for different regions of the planet.
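Mattheij's back-of-the-envelope figure checks out with the numbers given in the note (90,000 concurrent streams at a typical 56 kbit/s modem rate):

```python
# Rough check of the aggregate bandwidth for the Big Brother peak,
# using the figures from the letter above.
streams = 90_000
bitrate_kbps = 56  # assumed per-viewer modem-speed stream

total_gbps = streams * bitrate_kbps / 1_000_000  # kbit/s -> Gbit/s
print(f"Aggregate: {total_gbps:.2f} Gbit/s")     # ~5 Gbit/s, as estimated
```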

The idea here is to load up the backbone with only a few streams and then to fan out to the customer base from a point as close to them as possible. Outside of the AOL network the story would be completely different. I have talked to several high-volume bandwidth providers over the last couple of months about another project, and it seems to be really hard to find a single point on the Internet where you can reliably plug in more than some 600 Mbit/s; more than that automatically requires geographic splits. There is some real work being done in this area right now, though, and it looks like before long you'll be able to fairly easily get co-located bandwidth up to a gigabit.  With multiple-colour-lit fibre, there will be a tremendous surge in available bandwidth as pairs of router ports are upgraded from single-colour to multi-colour.

Note that usually the cost of a connection is the cable, hardly ever the equipment at the endpoints. The fact is, these AOL-issued figures nicely bring down “all” the claims of millions of concurrent streams that we have seen so far.  Probably those were more on the order of several thousand, taking into account the difference in size between the Big Brother project and those events. Nice ammo against people making bogus claims; too bad the industry will lose a lot of credibility. I think some sponsors of major web events are going to pick up on this information too and wonder how badly they've been had.



From: C. B. Patel

Subject: Time required to shift the modulation standard.

(Ed Note:  To a comment made by Steven Long of DOD, who said:  “If the FCC and their staff had paid attention a FULL YEAR AGO when the problems with 8VSB began to become legend, then they could have spent this past year commissioning a group to write an American COFDM standard JUST IN CASE the worst case came true.  The development of a ready-to-go American COFDM standard would cost almost nothing but would have been AN INSURANCE POLICY FOR THE NATION,” Patel responds:)

YES, the problems did surface more than a year ago. But so-called miracle-chip solutions also came out. So far, only Motorola has admitted to the false claims.

NxtWave has been proven wrong but has made no admission. Now NxtWave proudly claims its next chip, the NXT2002, due later this year, will solve the problems (which, according to their earlier claims, were already solved by their first chip).

How do you prove them right or wrong NOW? So the problems continue and the delay is prolonged!

About considering COFDM, the million-dollar question is WHO would have proposed COFDM a year ago or, even now, is going to submit a formal proposal?  And submit it to whom, and where?

Our economy might be booming, but R&D in TV is almost non-existent.  Who is going to spend $$s to support all the COFDM tests and other work?  The budget surplus is one source!

There has to be a process for considering a system. What could it be?

I may be wrong but the only organization that could invite solutions (RFPs) for COFDM or "improving VSB" is the ATSC.

I have read several FCC NPRMs. Does the FCC have a process for RFPs?  I honestly do not know. I do not know how the FCC would go about asking for a COFDM system.

Does the FCC do anything except accept a "proposed system" after an organization submits a "system" with all testing completed and with the related industry "agreed," in a way, and then put the proposal out for "comments" (as an NPRM)?

Best Regards,

C. B. Patel


From: Mark Hyman

Subject: FCC Cites a Higher-Speed Digital Divide

The inner-city, low-income neighborhoods cited by the FCC as being left behind in the "Higher-Speed Digital Divide" are the same ones that will be left behind in the "Digital TV Divide." Urban areas have a higher incidence of multipath, and therefore more failed 8-VSB reception, while low-income, inner-city residents depend more heavily on over-the-air reception with indoor antennas.  The FCC's continued insistence on EXCLUSIVE reliance on 8-VSB hurts the most disadvantaged -- inner-city low-income, minority, elderly and immigrant viewers -- those that rely much more heavily on over-the-air than do suburbanites and rural viewers.

Check out the Washington Post article:



Subj:  A few observations on 8-VSB

From: multiple contributors

K. Fitzpatrick

Whether we like it or not, there is an element of nationalistic thinking involved in trying to promote a standard, 8-VSB/ATSC, which has been proven time after time to be less than adequate for a great many of the consumers expected to utilize it.

The current Korean controversy (outlined in an article from the Korea Times) contains an interesting twist, since it clearly identifies the Korean government's choice of ATSC as both nationalistic and economic:  it believes the choice of ATSC for use in Korea will help Korea sell product into the American market.  The article also states:  "Also to be taken into consideration is LG Electronics' ownership of Zenith Electronics Corp. of the United States, which has the fundamental ATSC technologies."

The Korean broadcasters' assertions that the ATSC system doesn't adequately serve Korean consumers are falling on deaf governmental ears.  Remarkably similar to the situation here in the U.S....


Al Limberg

One should take into consideration the enormous sums in patent royalties Korean companies have paid to U. S. and Japanese companies with regard to consumer products, somewhere near a billion dollars a year I seem to recall reading.  If there is a chance to even up the flow of patent royalties out of Korea, it is probably more important to the well being of Korean consumers than some reception inconvenience.

You have to learn to think like people do who have to be more frugal than you to get by.  Failure to appreciate the viewpoint of people less well off than oneself is the ultimate conceit many Americans and Northern Europeans indulge themselves in.

Al Limberg


Bob Miller

First, the Korean royalties are paid out of funds received from the American consumers who buy Korean products. The Korean manufacturer first covers his cost, which includes the royalty, and then makes a profit on top of it. The American consumer, or some other consumer, is paying the royalty.

Now if you still feel sorry for poor Koreans we could pay them the royalties for using the 8-VSB standard. Each year the US Government will send a check to LG Electronics with the proviso that they distribute the money to poor Koreans.

But please, please just because we paid for it doesn't mean we have to use it. Just send them the money and put the 8-VSB technology on the shelf and use COFDM.

Reception inconvenience is what many poor people in our inner cities who can't receive ATSC HDTV experience. I am sure that they are only marginally poorer than the executives of LG Electronics of Korea, but it is still hard for them to pay for cable while the fat broadcasters monopolize their free airwaves to broadcast to the affluent suburbanites who can afford $10,000.00 home theater centers.

I guess American poor are less deserving than Korean poor.

Anyway send the money and don't use the technology, that way the poor in the inner cities can receive COFDM enabled DTV and the poor in Korea can enjoy all that money I'm sure LG will distribute.

Bob Miller

From: Martin Jacklin

I have to take my hat off to you. This coverage of the US DTV situation is miles ahead of the New York Times, The Wall Street Journal, the broadcast trade press and just about anything else I can think of. It's better than TV.

Folks, I really wish you the very best in your efforts to get the ship back on course. I have every faith that the economic magic of digital television will win through. I have no interest in seeing USA OTA broadcast go down the tubes. Although  ;-) hmm, it'd be nice to see its pipes flowing everywhere.

Nice, Bob (previous observation) - add to that that Korea is seriously considering changing their decision, and although they have Japan as a neighbor, they already use DVB-S for satellite and DVB-C for cable. No way are they going to shift that to "ISDB-S and ISDB-C".

The American public is smarter than a lot of people in its industrial corridors of power, and even on this reflector. As is clear from the CEA's (invisible) consumer sales figures, they have voted for the better technology with their wallets.

There may be some paper tigers in the way, e.g. 8-VSB, but these are after all only paper.  And this is a burning issue.

Martin Jacklin


From: Craig Birkmaier

Subject: The problem with "standards," statutes and regulations 

I received a copy of the newly approved ATSC standard for Data Broadcasting from Mark Richer. First, I'd like to offer my congratulations to Mark and the folks in T3/S13 for their efforts to bring the advantages of data broadcasting to fruition; it has been a long road since Mark asked me to help him instigate this effort during his first stint as Executive Director of the ATSC back in 1997.

Second, I would like to express my condolences to everyone, since it appears that the advantages of data broadcasting may be delayed or lost altogether, thanks to the efforts of a few narrow-minded folks who managed to get Representatives Bliley and Tauzin to characterize data broadcasting as a "Deal Breaker."

To be more precise, our representatives told broadcasters that leasing or selling their DTV spectrum for data services would be a deal breaker...that this spectrum was given to them to deliver the benefits of digital television, including HDTV to the American people:

-  Never mind that the television industry was built on the back of similar affiliate agreements, where local stations have leased their channels to television networks to carry programming of national/regional interest;

-  Never mind that Digital TV programs are just another form of data, or that the new data broadcast services will be delivering programming optimized for the TV, including video services like local news, weather and other information services now delivered by NTSC channels;

-  Never mind that the new data broadcast affiliation agreements are written in a manner that still allows the delivery of HDTV programming using virtually the entire 6 MHz channel.

Data broadcasting is a "Deal Breaker" because it might diminish the revenues Congress can get from spectrum auctions, from people who want to use this technology to compete with television broadcasters.  The words of Commerce Committee Chairman Bliley should be placed on the tombstone of Data Broadcasting, which died Tuesday:  "Let me be abundantly clear to the broadcasting community: You asked that Congress provide you with an opportunity to offer HDTV. We did that. Now some of you are getting cold feet. If you want to offer other services with the HDTV spectrum, you should pay for it, like you would in an auction."

In essence Bliley is saying, "Others are willing to pay big bucks for the privilege of getting into the data broadcast business, to deliver new digital media services to the masses...something that you COULD do for free, thus extending the DEAL that trades spectrum for delivering free-to-air television and information services to the masses."

Never mind that these services will NOT BE FREE, because the successful bidders will be collecting new indirect spectrum taxes from their subscribers.

And speaking of new standards, the following press release demonstrates the utter futility of developing standards the old fashioned way, with the long drawn out DUE PROCESS used by traditional standards groups.

For several years I have enjoyed seeing an old friend from the ACATS days, Eric Petajan, at MPEG meetings. Eric was with AT&T Bell Labs when they developed the 720P digital television system that was submitted for consideration by ACATS. He then started working on 3D rendering software including the facial animation techniques eventually adopted as part of the MPEG-4 standard. These software tools (and Eric) were recently spun out of Bell Labs to form a new venture, Face-to-Face.

The facial animation techniques in MPEG-4 will become an international standard sometime in the next year when version 2 of MPEG-4 reaches IS (International standard) status. Unfortunately, as you will see from this press release (below), the rapid march of technology has already rendered this not quite a standard technology obsolete...

Craig Birkmaier


From: Leonardo Chiariglione   Leonardo.Chiariglione@CSELT.IT  

(Ed Note:  When asked by Tech-Notes, “How do you see MPEG-4 fitting into the life of a typical television station, if at all?” his comments were as follows:)

This question would require a long answer. I can say that MPEG-4 is the natural extension of TV into interactivity - picture-based rather than character-based hyperlinks. It should be clear by now that putting a web browser on a TV does not make any sense. In my lab we have interesting examples of what you can do with MPEG-4 over MPEG-2.  At  you can see some (old) examples.

(Ed note:  To Birkmaier’s comments, Chiariglione responds:)

"The facial animation techniques in MPEG-4 will become an international standard sometime in the next year when version 2 of MPEG-4 reaches IS (International standard) status. Unfortunately, as you will see, the rapid march of technology has already rendered this not quite a standard technology obsolete..."  A desirable starting point when making sweeping assertions is to first get the facts right.

  1. MPEG-4 version 2 was approved 8 months ago in Maui.
  2. Facial animation is part of version 1, approved 22 months ago.
  3. Last July we received 3 pre-submissions in response to the call for the new work item on 3D Model Coding.

I will not comment on the specifics of the statement that triggered the assertion but will only make the following general observations:

  1. Where is the epoch-making 500-times compression of video brought to the attention of this reflector? It has taken over the world, right?

  2. Improvement comparisons expressed as integer numbers are suspect. I would suggest that the next announcement say 12.3 times better. It gives a more professional appearance.

  3. MPEG standards do not define the encoder, only the decoder. Therefore it is meaningless to say "my technology is n times better than MPEG's". The only possible statement is "my solution is n times better than my implementation of MPEG". In making such a statement, however, one risks disclosing one's immature understanding of the MPEG standard.  "Caveat emptor" was the landmark verdict of a 16th-century English judge who was asked to decide on the complaint of a farmer who had discovered, after he had paid for some sheep, that they were not of the performance claimed by the seller.  It would help to adopt the caution recommended by the English judge when buying a press release.

Leonardo Chiariglione 


The press release in question

From: Business Wire

Compression Algorithm Opens the Door to Widespread 3-D Application; Technique Compresses Geometric Data 12 Times More Efficiently Than MPEG4 Standard

NEW ORLEANS--(BUSINESS WIRE) -- Computer scientists from Bell Labs, the research and development arm of Lucent Technologies (NYSE: LU), and the California Institute of Technology have developed the first technique that makes it practical to transmit detailed three-dimensional data on the Internet and to work with this kind of data on personal computers.

At the SIGGRAPH 2000 Conference here this week, the researchers are announcing a breakthrough algorithm for what people in this field call "digital geometry compression." The breakthrough could have an impact in fields as diverse as manufacturing, entertainment, medicine, education and retail sales. Geometry in this sense refers to geometric representations of objects - anything from aircraft parts to cartoon characters - detailed information about size and shape with which 3-D virtual objects can be displayed, measured and manipulated. Digital geometric data is typically acquired by 3-D  laser scanning and represents objects using dense meshes of millions or even billions of triangles.

The compression challenge is to use the fewest possible bits to store and transmit these huge, complex data sets, which do not yield to the kinds of processing techniques that have made digital audio, image and video applications commonplace. Efficient geometry compression - delivering the same quality with fewer bits or higher quality with the same bit budget - could supercharge 3-D applications found today at the high end of manufacturing and film making. It also could unlock the potential of high-end 3-D on consumer systems.

The researchers - led by Wim Sweldens of Bell Labs' Mathematical Sciences Research Center and Professor Peter Schroeder of Caltech's Computer Science Department, who is currently on leave at Bell Labs - report that their technique for geometry compression is 12 times more efficient than the method standardized in MPEG4 and six times more efficient than the best previously published method.

The scientific results presented this week could solve problems in every area of geometry processing technology, from data acquisition by 3-D scanning to noise removal, storage, transmission, authentication, editing and reproduction.

Several aspects of the Bell Labs / Caltech approach set it apart from other research in digital geometry processing. One is the team's original use of wavelet transformation, "wavelets" for short, a mathematical technique that has solved a surprising variety of practical problems since its emergence in the early 1980s. Wavelet transformation is complementary to Fourier transforms, long-established techniques for processing signals and analyzing physical data.

"Geometry is poised to become the fourth wave of digital multimedia communication," Sweldens said. "The first three waves - sound in the 1970s, images in the '80s, and video in the '90s - were enabled by signal processing based on Fourier transforms. This kind of signal processing simply cannot handle geometry. Wavelets can."

In fact, the first generation of wavelets, which were built on Fourier transforms, did not handle the geometry of curved surfaces well. One of Sweldens's earlier fundamental contributions was the development of a technique called "lifting," an efficient way to generate wavelets without Fourier transforms. (Although developed with geometry in mind, lifting proved to be effective in other areas as well; it was recently incorporated into the JPEG 2000 standard for image compression.)
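The "lifting" idea mentioned above can be illustrated in a few lines. This is a minimal sketch of a one-level Haar-style lifting transform (predict the odd samples from the even ones, then update the evens to preserve the running average) and is not the Bell Labs / Caltech geometry code, which operates on surface meshes rather than 1-D signals:

```python
# Minimal one-level Haar lifting transform: split, predict, update.
# Illustrative only -- real geometry compression lifts over mesh surfaces.
def lift(signal):
    evens, odds = signal[::2], signal[1::2]
    details = [o - e for e, o in zip(evens, odds)]        # predict step: odd ~ even
    coarse = [e + d / 2 for e, d in zip(evens, details)]  # update step: keep averages
    return coarse, details

coarse, details = lift([2, 4, 6, 8])
print(coarse, details)  # smooth part [3.0, 7.0] and small, compressible details [2, 2]
```

The detail coefficients are small wherever the signal is smooth, which is exactly what makes the representation compressible.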

Producers of animated films and video games are expected to be among the early adopters of wavelet-based geometry compression. "Imagine a multiplayer, Internet-based video game that looks as good as Toy Story," Schroeder said. But the potential applications go far beyond entertainment.

"Manufacturing companies that can justify a huge investment in systems for 3-D scanning and digital geometry processing have already begun using this technology to create virtual parts catalogues," Schroeder said. "They can use geometric representations when they put out requests for parts, use geometry to guide fabrication equipment, and compare scans of newly made parts to the original designs. Now, if you drastically reduce the cost of this technology while improving the quality of applications, geometry processing is likely to be used in many more parts of the manufacturer's enterprise, from design to sales and order fulfillment. Also, the technology becomes something that small manufacturers, potentially every manufacturer, can and will use."

Mass customization is another likely application. For example, a clothing company might take 3-D scans of customers, transmit the geometric representations to a factory, and ship tailored goods to the customers' homes. Though tools for working with geometry are being developed first by and for manufacturers, film makers and other high-end users, consumer applications may not lag far behind. "Think of real estate," Sweldens said. "Today someone selling a house puts pictures of all the rooms on the Web. Soon the seller may be putting a video walkthrough of the house on the Web. When geometry processing reaches the desktop - in software like today's digital photo and video editors - you'll not only be able to see any view of any room in the house, but you'll also be able to see how it will look after you knock out a wall, repaint the rooms, and drop in new furniture from a 3-D catalogue."

Improvements in digital geometry compression, which are measured in terms of the number of bits per vertex needed to describe a mesh of triangles within a given margin of error, can be exploited in the same ways as gains in other kinds of compression. Application designers, and ultimately end users, will be able to trade off bits or bandwidth for the quality of 3-D representations. Tested against other approaches, the Bell Labs / Caltech method proved to be superior across the board and especially effective in enabling high-quality reproduction with relatively few bits.
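To make the bits-per-vertex metric concrete, here is an illustrative size calculation. The rates below are hypothetical round numbers chosen for illustration; the press release gives relative improvements (12x, 6x) but no absolute bits-per-vertex figures:

```python
# Illustrative only: how bits-per-vertex translates into file size for a
# scanned mesh. The two rates are hypothetical, not from the press release.
triangles = 2_000_000
vertices = triangles // 2          # roughly half, for a typical closed mesh

for bits_per_vertex in (20.0, 5.0):  # hypothetical "before" and "after" rates
    megabytes = vertices * bits_per_vertex / 8 / 1_000_000
    print(f"{bits_per_vertex:>4} bits/vertex -> {megabytes:.1f} MB")
```

Even a modest reduction in bits per vertex matters at these mesh sizes, which is why the margin-of-error tradeoff works the same way as in audio or video coding.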

This is the sixth year running that the extremely competitive technical program of the annual SIGGRAPH conference - considered the premiere international event showcasing scientific research and new developments in computer graphics and interactive technology - has featured papers by Sweldens, Schroeder, and their collaborators. The results published this week augment an increasingly complete toolbox for digital geometry processing that the team has been developing since 1994.

The researchers' latest breakthrough in compression was built on their earlier achievements, including the generalization of wavelets to represent spherical data and arbitrary geometries. It also exploited their research on meshes, particularly their insight that two of three types of coordinates used to describe a mesh consume a large fraction of the bit budget but contribute very little to quality. Another key element was the collaborators' original contribution to "subdivision," a novel way of building smooth surfaces.

Like other areas of wavelet research - which is known for bringing together mathematicians and computer scientists, theorists and engineers - digital geometry processing has inspired collaboration across boundaries that sometimes separate disciplines and institutions. Collaboration between Sweldens and Professor Ingrid Daubechies of Princeton University has focused primarily on the theoretical side of wavelets, yet has had an impact on the applied side as well. In addition to Sweldens and Schroeder, collaborators who have contributed to the current work are: Andrei Khodakovsky and Igor Guskov at Caltech; Kiril Vidimce at Mississippi State University; David Dobkin and Aaron Lee at Princeton; and Lawrence Cowsar at Bell Labs, Lucent Technologies.

More information on the annual SIGGRAPH conference can be found at  The papers "Progressive Geometry Compression" and "Normal Meshes" are available at  and

Bell Labs is celebrating its 75th anniversary this year. One of the most innovative R&D entities in the world, Bell Labs has generated more than 40,000 inventions since 1925. It has played a pivotal role in inventing or perfecting key communications technologies for most of the 20th century, including transistors, digital networking and signal processing, lasers and fiber-optic communications systems, communications satellites, cellular telephony, electronic switching of calls, touch-tone dialing, and modems. 

Today, Bell Labs continues to draw some of the best scientific minds. With more than 30,000 employees located in 25 countries, it is the largest R&D organization in the world dedicated to communications and the world's leading source of new communications technologies. In a recent report, Technology Review magazine said Bell Labs patents had the greatest impact on telecommunications for 1999.

Lucent Technologies, headquartered in Murray Hill, N.J., USA, designs and delivers the systems, software, silicon and services for next-generation communications networks for service providers and enterprises. Backed by the research and development of Bell Labs, Lucent focuses on high-growth areas such as broadband and mobile Internet infrastructure; communications software; communications semiconductors and optoelectronics; Web-based enterprise solutions that link private and public networks; and professional network design and consulting services. For more information on Lucent Technologies and Bell Labs, visit the Web sites and

Founded in 1891, Caltech has an enrollment of some 1900 students, and an academic staff of about 280 professorial faculty and 130 research faculty. The Institute has more than 19,000 alumni. Caltech employs a staff of more than 2000 on campus and 4700 at JPL.

Over the years, 28 Nobel Prizes and four Crafoord Prizes have been awarded to faculty members and alumni. Forty-five Caltech faculty members and alumni have received the National Medal of Science; and eight alumni (two of whom are also trustees), two additional trustees, and one faculty member have won the National Medal of Technology. Since 1958, 13 faculty members have received the annual California Scientist of the Year award. On the Caltech faculty there are 77 fellows of the American Academy of Arts and Sciences; and on the faculty and Board of Trustees, 69 members of the National Academy of Sciences and 48 members of the National Academy of Engineering. For more information on Caltech, visit its Web site at


Subject: Lower bit rate encoders

From: Craig Birkmaier

Tom McMahon wrote: “There are some companies building real, viable VOD businesses around WMT MPEG-4 streams in the 700Kbit range.”  I agree with Tom that the Windows Media Technology codec looks very good in the 700Kbps range. This is also true for the Sorensen codec in QuickTime. Real 7.0 claims similar quality, but I have not seen this with my own eyes so I cannot make a valid comparison.

The MPEG-4 codec also produces good quality at this bit rate, but I'm not sure it is as good as the others, which raises an important bone to pick with Tom. I think it would be a good idea to stop using the terms Windows Media codec and MPEG-4 streams in the same sentence.

The WMT codec has common roots with the MPEG-4 codec, but they have evolved down separate paths. Windows Media bit streams are not compliant with an MPEG-4 decoder, as they use a completely different streaming format (ASF); furthermore, I believe the WMT codec has evolved beyond the MPEG-4 video codec implementation. In the same vein, I saw a report from the recent Beijing MPEG meetings about the comparative subjective testing of codecs submitted for evaluation in response to a call for new codec technology by MPEG.

I believe that the Real 7 and Windows Media codecs were tested, along with a third, possibly the Sorensen codec. Perhaps Leonardo Chiariglione can provide some additional details about the testing and the results. As I recall, all offered quality that equaled or exceeded that of the MPEG-4 video codec. Also, I believe that the ITU is working on new codec technologies under H.263, and that this work may yield solutions that produce excellent results in the sub-1 Mbps range.



From: Leonardo Chiariglione <Leonardo.Chiariglione@CSELT.IT> (and others as noted)

“Windows Media bit streams are not compliant with an MPEG-4 decoder as they use a completely different streaming format (ASF).” This is not true. MPEG-4 is a toolkit standard: you can take the MPEG-4 tools that suit your needs and go elsewhere for other tools (not that I recommend it). MPEG-4 has developed its own file format that can be used to stream MPEG-4 content, but anybody is free to use a different one. The only advice I can give here is that the correct expression is to indicate what part of MPEG-4 a company claims conformance with (and, by the way, to add the specific version used).

“I believe the WMT codec has evolved beyond the MPEG-4 video codec implementation.” No comment. It may be worthwhile to point out, however, that the patent statements received from patent rights holders commit those holders to grant licenses on fair and reasonable terms and non-discriminatory conditions for products conforming to the standard. No commitment was made to grant licenses for non-conforming products. This applies to the reference code as well.

“As I recall, all offered quality that equaled or exceeded that of the MPEG-4 video codec.” When making comparisons, one must make sure that the terms of comparison are of the same type. The "call for evidence" that MPEG issued in Maui asked proponents to use their own algorithms (to the degree of optimization that they felt opportune for the purpose) under well-specified conditions as one element of comparison and the MPEG-4 Video reference software as the other element. It is well known that the MPEG-4 reference software of the encoder (like all other pieces of reference software that MPEG has developed over the years, including e.g. MP3) only provides syntactically correct bitstreams (usable, e.g., for conformance testing purposes), not state-of-the-art bitstreams. The MPEG-4 reference software of the encoder is _not_ optimized, as encoder optimization is where companies have a competitive advantage, and no one felt compelled to donate that competitive advantage to ISO, i.e. to their competitors. This imbalance in the comparison was accepted because the purpose of the call was not to run a "beauty contest" but to get _evidence_ about the existence of something new, worth considering in a real call. The video group in Beijing came to the conclusion that there _may_ be something worth considering (note that not all proponents disclosed their algorithms) and that a Call for Proposals will be issued in October.
A draft of the Call can be found at . The draft makes clear that, as a result of the evaluations following the formal subjective tests using optimized MPEG-4 Video encoders, MPEG may decide to do nothing, extend MPEG-4 Video, or develop a new video coding standard.

Leonardo Chiariglione


The Tech-Notes are published by Larry Bloomfield and Jim Mendrala. We can be reached by either e-mail (above) or land lines (408) 778-3412, (661) 294-1049, or fax at (419) 710-1913 or (419) 793-8340. The opinions expressed herein are those of the individual authors and do not necessarily reflect the opinions or positions of their friends, employers or associates.  If you wish to remove yourself from this list, send an E-mail to:  In the subject area, put the word Remove.

Please visit our web page to review our policies and to see any additional information.


