Embedded CPU advice...
Please keep this thread clean. I need the voice of experienced programmers here.
So I've finally convinced the right people at work that it's time for us to explore an IP-based alternative to our current audio network architecture, which is essentially circuit-switched TDM over OC3. Now the problem is that I need to start coming up with a plan for a proof-of-concept.

What I need to build, quite simply, is a box which sits between the Ethernet world (1000BASE-T) and an FPGA, shoveling data between the two. It'll need to handle receiving data from the gate array and shipping it out to various multicast destinations, as well as receiving multicast streams, buffering them to ensure they're assembled in order and without dropouts, and then sending them to the gate array. If it can also manage to drive a display and take input from a keyboard-like device, all the better.

For the FPGA-CPU interface, I'm thinking we'll probably use PCIe and DMA. The Xilinx Spartan-6 LXTs have a PCIe endpoint block in silicon, so that part should be easy. Where I fall on my face, in terms of practical knowledge, is in knowing what it's going to take in terms of CPU power to handle the Ethernet side of the equation. In looking around at the SBC / SOM market, I see a lot of relatively powerful Atom-based systems (example), some ARM9 devices (example), and... not much else.

Oh, and it's got to be fanless. I don't give a crap about form factor or power requirements, but no fans. Our stuff goes into recording studios, not server rooms.

In terms of being able to move a pretty massive amount of multicast packets, what's it actually going to take to make this happen? Useful data rate will be in the 150-200 Mb/sec neighborhood, both ways, continuous. That's raw payload only, without overhead, and packet sizes are going to be tiny: in our current architecture, each "packet" is 32 bits per audio stream, delivered serially at a rate of 48,000 per second, and figure 50-100 streams per device, depending on configuration.

The tradeoff for larger packet sizes is a larger buffer and more delay, which isn't a good thing in a live broadcast environment. This ain't VOIP.
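To put rough numbers on the packet-size question, here's a quick back-of-the-envelope calculation using the stream figures above. The framing model (plain UDP/IPv4 multicast over Ethernet, one stream per packet) and the bundling factors are my assumptions, not the actual protocol:

```python
# Back-of-the-envelope wire-rate math: 100 streams of 32-bit samples at 48 kHz.
# Assumed framing: UDP/IPv4 over Ethernet, each packet carrying N samples of one stream.

SAMPLE_RATE = 48_000        # samples/sec per stream
STREAMS = 100
BYTES_PER_SAMPLE = 4        # 32 bits

# Per-frame overhead: IPv4 (20) + UDP (8) headers, plus on the wire
# preamble+SFD (8) + MAC header (14) + FCS (4) + inter-frame gap (12).
IP_UDP = 20 + 8
FRAME_EXTRA = 8 + 14 + 4 + 12
MIN_ETH_PAYLOAD = 46        # frames are padded up to the 64-byte minimum

def wire_rate_mbps(samples_per_packet: int) -> float:
    """Total bit rate on the wire if every packet carries N samples of one stream."""
    eth_payload = max(samples_per_packet * BYTES_PER_SAMPLE + IP_UDP, MIN_ETH_PAYLOAD)
    frame_bits = (eth_payload + FRAME_EXTRA) * 8
    packets_per_sec = SAMPLE_RATE / samples_per_packet * STREAMS
    return packets_per_sec * frame_bits / 1e6

raw = SAMPLE_RATE * STREAMS * BYTES_PER_SAMPLE * 8 / 1e6
print(f"raw payload:       {raw:7.1f} Mb/s")                   # 153.6 Mb/s
print(f"1 sample/packet:   {wire_rate_mbps(1):7.1f} Mb/s")     # 3225.6 Mb/s -- exceeds GigE!
print(f"48 samples/packet: {wire_rate_mbps(48):7.1f} Mb/s")    # 206.4 Mb/s
```

The point: at one sample per stream per packet, header overhead and minimum-frame padding blow past gigabit Ethernet entirely, while bundling 48 samples (1 ms) per packet brings the wire rate down near the raw payload figure, at the cost of a millisecond of packetization delay.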
ARM9 is your answer.
With convection heat pipe cooling...
A decently sized heatsink would do, no problems there.
do you have a latency limitation?
Do you need to filter incoming traffic in any way? As I understand it, it's all TCP/IP based. If that's the case, I would suggest something like the Intel Atom D510 - plenty of power to do some traffic shaping and other more advanced things with minimal power consumption and nominal heat produced. Plus it's x86_64, which would make programming so much easier, since you can install Linux and use a shitload of pre-existing libraries to help you.
Originally Posted by Reverant
(Post 623810)
ARM9 is your answer.
One thing I'm noticing is that most of the Atom & Geode based platforms come with on-board VGA, whereas the ARM9 machines, if anything, include only a simple LVDS driver. Having done a bit more research into Realtime Linux, it seems that with some clever coding, we can probably run the whole user interface on the platform as well, by assigning the network code to run in the realtime kernel, and sending the UI over to the "full" kernel to be run in a best-effort mode.
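For what it's worth, the receive-side network code that would live in the realtime half can be prototyped in a few lines. This is only a sketch: the multicast group/port and the idea of a 32-bit per-packet sequence number are my assumptions, not the actual wire format:

```python
import socket
import struct

def open_multicast_rx(group: str = "239.1.2.3", port: int = 5004) -> socket.socket:
    """Join a multicast group (hypothetical group/port) and return the socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

class JitterBuffer:
    """Reorder packets by sequence number; release only contiguous, gap-free runs."""
    def __init__(self, start_seq: int = 0):
        self.next_seq = start_seq
        self.pending = {}            # seq -> payload, holds out-of-order arrivals

    def push(self, seq: int, payload: bytes) -> None:
        if seq >= self.next_seq:     # silently drop late duplicates
            self.pending[seq] = payload

    def pop_ready(self) -> list:
        """Return payloads that are in order with no gaps, ready for the FPGA."""
        out = []
        while self.next_seq in self.pending:
            out.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return out
```

The receive loop would recv() a datagram, split off the sequence number, push() it, and DMA whatever pop_ready() returns toward the gate array. A real implementation would also need a deadline after which a missing packet is declared lost and concealed, or the buffer stalls forever.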
Originally Posted by JasonC SBB
(Post 623811)
With convection heat pipe cooling...
Originally Posted by y8s
(Post 623883)
do you have a latency limitation?
The big limitation here is in dealing with "live" audio paths, such as between a DJ's microphone and his headphones. In such an environment, the end-to-end delay really needs to be kept down to just a couple of milliseconds, or else the talent will start to perceive comb filtering effects. Presently, we achieve signal latencies in the tens of microseconds for signal paths that are internal to one device, and generally under 200 us across a large routed path.

Of course, there are a lot of sources for which latency is no bother at all. The path from a device such as a CD player or automation computer to air could easily suffer delay in the tens of milliseconds, and something like a satellite receiver or an OB CODEC could tolerate 100 ms or more without anybody noticing.

One approach here would be to have variable payload sizes. A "critical" source might be transported in packets of just one or two samples each, whereas a less critical source might be bundled into packets of tens of samples or more prior to transmission. This would lower the workload on the system somewhat. (I wonder if that idea is patentable? Just to be safe, let's say that I reserve all rights to that concept.)
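The variable-payload idea is easy to budget for: at 48 kHz, the delay added just by waiting to fill a packet is the samples-per-packet divided by the sample rate. A quick sketch (the bundling factors are illustrative, not proposed values):

```python
SAMPLE_RATE = 48_000  # Hz

def packetization_delay_us(samples_per_packet: int) -> float:
    """Delay added purely by buffering N samples before transmit (no network hops)."""
    return samples_per_packet / SAMPLE_RATE * 1e6

for n in (1, 2, 16, 48):
    print(f"{n:3d} samples/packet -> {packetization_delay_us(n):7.1f} us")
#   1 sample/packet  ->   20.8 us  (fits the mic-to-headphone budget)
#  48 samples/packet -> 1000.0 us  (fine for CD players, automation, satellite feeds)
```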
Originally Posted by UrbanSoot
(Post 623947)
Do you need to filter incoming traffic in any way?
As I understand it, it's all TCP/IP based. If that's the case, I would suggest something like the Intel Atom D510 - plenty of power to do some traffic shaping and other more advanced things with minimal power consumption and nominal heat produced. Plus it's x86_64, which would make programming so much easier, since you can install Linux and use a shitload of pre-existing libraries to help you.
Is there no OTS product that already does what you're talking about? Or would building it yourself be cheaper than paying someone else for the parts and labor to do it?
This isn't a personal project; it's a new technology platform for my company's line of networked audio consoles.

So no, there's really no specific product that interfaces directly to our current FPGA-based audio DSP technology and converts it to Ethernet. The goal, however, is to find an off-the-shelf single-board computer platform which we can write some code for and then integrate into a box which will also contain the requisite hardware to translate between the PC world and our current OC3-based links. As we move forward, future products will forgo the OC3 interface in favor of integrating directly with the SBC, probably via PCIe.
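On the transmit side (FPGA toward the network), the Ethernet half of that translation box could be exercised with ordinary UDP multicast sockets long before any custom hardware exists. A sketch, assuming a made-up wire format of a 32-bit big-endian sequence number followed by raw samples:

```python
import socket
import struct

def open_multicast_tx(ttl: int = 1) -> socket.socket:
    """UDP socket set up for multicast transmit; TTL 1 keeps traffic on the local LAN."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

def build_frame(seq: int, samples: bytes) -> bytes:
    """Hypothetical wire format: 32-bit big-endian sequence number + raw sample data."""
    return struct.pack("!I", seq) + samples

def send_frame(sock: socket.socket, group: str, port: int,
               seq: int, samples: bytes) -> None:
    """Ship one frame to a multicast group (group/port are placeholders)."""
    sock.sendto(build_frame(seq, samples), (group, port))
```

In the real box this path would be fed by PCIe DMA from the FPGA rather than by Python, but a sketch like this is enough to exercise the switches, IGMP snooping, and the receiver code on the other end.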
I see the bigger picture now. I didn't clue in that you were trying to add an additional product to your company's current line. I thought you were still doing the traveling studio-setup thing; didn't realize you were working on audio product engineering now.
We've dealt with some companies that do wireless transmission of two video channels over multiple wireless-N channels (or a network cable), and we try to hit (wireless) latencies of 10 ms or below (less than a frame of video at 60 Hz).

Most of what we've seen are embedded Linux or Windows systems that have one or more ExpressCard slots for the network adapter (future flexibility), but I'm not sure of the processors. Pretty sure the last one I saw didn't have a fan.
I agree with Slava; for proof of concept, or even production, it would be hard to beat a dual-core Atom CPU running at around 1.7 GHz. That should give you MORE than enough power to work with when running a small Linux kernel. It would also give you a PCIe slot for which you could make a custom interface card to your FPGA.

Is size even a consideration, or are you looking at a rackmount solution?
Originally Posted by neogenesis2004
(Post 624020)
I agree with Slava, for proof of concept or even production it would be hard to beat a dual core atom cpu running at like 1.7Ghz.
And obviously, the whole point of a proof-of-concept demo is to get into the general ballpark of the chipset that would be used in the final product. Is size even a consideration or are you looking at a rackmount solution? Ultimately, it would be nice if it were able to be integrated into the audio console itself. I'm assuming that as time marches on, dual-core atom SOMs (or something that's binary-compatible with them) will start showing up. When you are building products with a typical 5-10 year production lifecycle, obsolescence is always a Sword of Damocles.
Supermicro has a D510-based server motherboard for cheap and it fits perfectly in their 1U half-depth cases ;)
PS: once you are ready for production, you can get very good custom-made server cases from Casetronic. I believe Supermicro does custom to-spec systems too. Anyways, if you are looking for something dirt cheap - look into VIA C7 CPUs as well. Plenty of boards and software support.
If you used an ION/ION2 Atom board you could also use the CUDA dev kit to make use of the GeForce core: http://developer.nvidia.com/object/c...downloads.html That should be able to accelerate audio stream processing a great deal. The GeForce 9400M has 16 stream processors and runs at almost 600MHz. I don't see how you could possibly need more HP.
Originally Posted by neogenesis2004
(Post 624188)
If you used a ION/ION2 atom board you could also use the CUDA dev kit to make use of the geforce core.
Originally Posted by UrbanSoot
(Post 624178)
Anyways, if you are looking for something dirt cheap - look into VIA C7 CPUs as well. Plenty of boards and software support.
Originally Posted by UrbanSoot
(Post 624178)
PS: once you are ready for production, you can get very good custom-made server cases from Casetronic.
It occurs to me that most of y'all have no idea what I actually do for a living, so here are some examples of the products we build.

First, an oldie, but one of my favorites, the ABX Multitrack console: http://img40.imagefra.me/img/img40/7...jm_f936007.jpg

Here's BMX, the mack daddy of major-market on-air boards in its day (recently discontinued). This one's in Ryan Seacrest's studio at E!: http://richmaddox.com/images/DSCN1864.JPG

Netwave, one of our newer boards targeted at the "budget conscious" market (ie: it only costs about $15,000): http://img02.imagefra.me/img/img02/7...dm_8f55dd8.jpg (Yes, that is a HUD.)

Our "workhorse" console, the RMX: http://img02.imagefra.me/img/img02/7...cm_5e283ef.jpg

SMX, a tiny little two-bus mixer for news booths, voicetrack rooms, etc: http://radiomagonline.com/media/0605/506br1109.jpg

The Vistamax (well, six of them, plus power supplies). This is the core routing frame. Each one is capable of handling up to 448 audio inputs and outputs, plus providing network connectivity for the consoles: http://img02.imagefra.me/img/img02/7...gm_df86934.jpg
Originally Posted by Joe Perez
(Post 624218)
Interesting. I'd never really thought about a GPU. Honestly though, I'm very much leaning towards a regular ole' commodity x86-class board that has everything on it we'd need: Ethernet MAC, video, PCIe host, USB host, etc.
Originally Posted by neogenesis2004
(Post 624238)
You do realize that a ION motherboard is a regular ole' x86 class board right?
I went and looked it up, and it looks like a fine PC board, but we really need a proper SOM-style board, with a single-source power supply, I/O on headers rather than back-panel connectors, etc. A desktop-style ATX/ITX/BTX board would be a bit of a pain in the ass to deal with in terms of powering it and interfacing to it. Not impossible, just not my first choice.

There's also a certain stigma within the company attached to anything that's obviously a "PC." Can't explain this; it's just how things are. Make it look like a ruggedized industrial controller and folks are fine with it. But if it looks like something you could buy at Fry's or from Dell, it's perceived as cheap and flimsy.
Originally Posted by Joe Perez
(Post 624360)
No, I didn't. Didn't realize we'd gone in that direction.