BCM2835 datasheet


190 posts   Page 6 of 8
by jamesh » Sun Jan 01, 2012 12:07 pm
Kernel version - should be >3.0 I believe.

And I reiterate, I think it's borderline impossible to reverse engineer the GPU code. Actually, take out the borderline bit. It is simply MUCH too complicated, and a pointless exercise.

You could probably reverse engineer the Linux side libraries, but hopefully there won't be any point (on Linux anyway), as the supplied libraries are pretty well optimised already, and are continually worked on at Broadcom, so any reverse engineering would always be playing catch-up.

I see the point about people needing acceleration for non-Linux OSes. Not sure what will happen there, as Linux is the Raspi supported platform.
Raspberry Pi Engineer & Forum Moderator
Posts: 11488
Joined: Sat Jul 30, 2011 7:41 pm
by Benedict White » Sun Jan 01, 2012 1:00 pm
JamesH said:


Kernel version - should be >3.0 I believe.

And I reiterate, I think it's borderline impossible to reverse engineer the GPU code. Actually, take out the borderline bit. It is simply MUCH too complicated, and a pointless exercise.

You could probably reverse engineer the Linux side libraries, but hopefully there won't be any point (on Linux anyway), as the supplied libraries are pretty well optimised already, and are continually worked on at Broadcom, so any reverse engineering would always be playing catch-up.

I see the point about people needing acceleration for non-Linux OSes. Not sure what will happen there, as Linux is the Raspi supported platform.


Good news on the kernel version, though supporting all 3.x kernels would be better.

As for reverse engineering, there are a lot of kernel people out there who take a perverse joy in reverse engineering hardware. Look at how long it took someone to win the bounty on making a driver for the Xbox Kinect: four hours after release. Whilst it is a simple device by comparison, you would be surprised at how much pleasure it gives some people, perverse or otherwise.

The way Nvidia stops its current version of the GPU being hacked is making sure there are good drivers about for current versions of the kernel, and not insisting on which versions people can use.
Posts: 225
Joined: Sat Dec 24, 2011 12:24 am
by jamesh » Sun Jan 01, 2012 1:09 pm
Benedict White said:


JamesH said:


Kernel version - should be >3.0 I believe.

And I reiterate, I think it's borderline impossible to reverse engineer the GPU code. Actually, take out the borderline bit. It is simply MUCH too complicated, and a pointless exercise.

You could probably reverse engineer the Linux side libraries, but hopefully there won't be any point (on Linux anyway), as the supplied libraries are pretty well optimised already, and are continually worked on at Broadcom, so any reverse engineering would always be playing catch-up.

I see the point about people needing acceleration for non-Linux OSes. Not sure what will happen there, as Linux is the Raspi supported platform.


Good news on the kernel version, though supporting all 3.x kernels would be better.

As for reverse engineering, there are a lot of kernel people out there who take a perverse joy in reverse engineering hardware. Look at how long it took someone to win the bounty on making a driver for the Xbox Kinect: four hours after release. Whilst it is a simple device by comparison, you would be surprised at how much pleasure it gives some people, perverse or otherwise.

The way Nvidia stops its current version of the GPU being hacked is making sure there are good drivers about for current versions of the kernel, and not insisting on which versions people can use.


Er, I did put >3.0, so that should cover quite a few versions... The actual number that will be released I don't know yet, but I expect continual development and increasing kernel numbers as time goes by.

Yes, there are people who love trying to reverse engineer stuff. I'm just not sure they would want to put in the man-years of effort on this one. It's not a simple USB protocol.
Raspberry Pi Engineer & Forum Moderator
Posts: 11488
Joined: Sat Jul 30, 2011 7:41 pm
by Bakul Shah » Sun Jan 01, 2012 2:46 pm
JamesH said:


I see the point about people needed acceleration for non-Linux OS's. Not sure what will happen there, as Linux is the Raspi supported platform.



Non-Linux OS developers can do one of three things:

1. use a dumb framebuffer and implement 2D ops in the host side s/w.

2. figure out what messages are needed for 2D acceleration by running the same operations on Linux and capturing & analyzing messages passed — not sure how complex this will be + every library or GPU firmware update can potentially break this.

3. find the Linux host side dependencies of libEGL.so and libGLES2.so (or whatever you are calling them) and emulate the Linux side code (that is, not any GL code). Things like malloc, free, strcmp, some syscalls, memory mapping, etc. The point of this would be to use the *exact same* Linux Raspi GL libraries — if the non-Linux OS can use the same GL API. Not trivial, but I think this is doable since the Linux side code is well documented and no closed source binaries need to be reverse engineered (which also means we can continue using any updated library code).

Not ideal but these seem to be options (that don't require support from the Foundation or Broadcom). [Edited as numbered lists don't seem to work]
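Option 3 above can be sketched in C. It is only an assumption until the binaries ship which libc symbols the GL libraries would actually pull in; `shim_strcmp` and `shim_memcpy` are hypothetical names, shown purely as representative examples of the freestanding replacements a non-Linux OS would export:

```c
/* Sketch of "option 3": freestanding replacements for the ordinary libc
 * symbols the Raspi GL libraries might link against.  Which symbols are
 * really needed is an assumption; these two are illustrative only. */
#include <stddef.h>

/* Replacement strcmp, e.g. for matching extension-name strings. */
int shim_strcmp(const char *a, const char *b)
{
    while (*a && *a == *b) { a++; b++; }
    return (unsigned char)*a - (unsigned char)*b;
}

/* Replacement memcpy: byte-wise, favouring correctness over speed. */
void *shim_memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--) *d++ = *s++;
    return dst;
}
```

The real work, of course, is in the syscall and memory-mapping emulation rather than string helpers like these.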
Posts: 292
Joined: Sun Sep 25, 2011 1:25 am
by Benedict White » Sun Jan 01, 2012 3:23 pm
JamesH said:


Er, I did put >3.0, so that should cover quite a few versions... The actual number that will be released I don't know yet, but I expect continual development and increasing kernel numbers as time goes by.

Yes, there are people who love trying to reverse engineer stuff. I'm just not sure they would want to put in the man-years of effort on this one. It's not a simple USB protocol.


That will teach me to read properly!

The open source Nvidia drivers were not a simple USB protocol either. We will see what they come up with.

I sent you a PM BTW.
Posts: 225
Joined: Sat Dec 24, 2011 12:24 am
by DavidS » Sun Jan 01, 2012 4:19 pm
On the last page it was implied that we will not even have the information on how to talk to the binary BLOB? I am also very interested in writing an OS, though I need video output, and I had assumed that we would have good documentation on sending messages to the BLOB so we can do video at least a little better than a dumb framebuffer.
ARM Assembly Language: For those that want: Simple, Powerful, Easy to learn, and Easy to debug.
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by DavidS » Sun Jan 01, 2012 4:22 pm
Does this mean that we will have to create a very slow interface over GPIO and throw together a simple ARM-based GPU in order to have video acceleration without Linux?
ARM Assembly Language: For those that want: Simple, Powerful, Easy to learn, and Easy to debug.
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by tufty » Sun Jan 01, 2012 8:28 pm
No. GPIO doesn't come into it. If you, like me, are non-Linux and can't make a layer that looks enough like Linux for the Linux libraries to work (quite a significant task in and of itself), you will have a dumb framebuffer and will have to write your own rendering code. You will not have hardware acceleration, /full stop/. If that's a problem, you will have to spend ages reverse-engineering the protocol that the GL/VG libraries talk, or buy something else.

Simon.
Posts: 1361
Joined: Sun Sep 11, 2011 2:32 pm
by foo » Sun Jan 01, 2012 11:50 pm
How will you even have a dumb framebuffer if the GPU internals are unknown?  You'll still have to do some reverse engineering just to get it into framebuffer mode and know where/how to draw pixels, and that would be the bootstrap into further discovery.
Posts: 52
Joined: Thu Dec 29, 2011 12:49 am
by jwatte » Mon Jan 02, 2012 12:20 am
Bakul said:


Non-Linux OS developers can do one of three things:

1. use a dumb framebuffer and implement 2D ops in the host side s/w.



Really? Is this guaranteed? Are there data sheets on how to set up the format and access of the framebuffer, and the video timing?

From what I've seen, even getting that far needs some kind of collaboration from Broadcom. Remember: this is NOT a PC where you can make a couple of VESA BIOS interrupt calls to an option ROM, and a framebuffer of known properties is set up for you.

Perhaps the BLOB can be used from other OSes; that would be nice. However, that means that each OS needs to provide whatever environment the BLOB needs, which probably includes all kinds of things related to interrupts, MMU hardware, interrupt control (might be a problem on an RTOS), etc...
Posts: 87
Joined: Sat Aug 13, 2011 7:28 pm
by Bakul Shah » Mon Jan 02, 2012 12:25 am
tufty said:


No. GPIO doesn't come into it. If you, like me, are non-Linux and can't make a layer that looks enough like Linux for the Linux libraries to work (quite a significant task in and of itself), you will have a dumb framebuffer and will have to write your own rendering code. You will not have hardware acceleration, /full stop/. If that's a problem, you will have to spend ages reverse-engineering the protocol that the GL/VG libraries talk, or buy something else.

Simon.


Perhaps naïvely, I am actually hopeful! Once we get access to the GL libraries we will know the scope of the work required to supply the missing Linux dependencies, but this is not unlike what is done to port the BSD TCP/IP stack to embedded OSes (supplying the missing BSD bits it relies on).

Just thought of one thing the Foundation can do that'll help us and not be a burden on them or Broadcom: provide lib{EGL,GLES2}.a libraries for static linking! Such a target may already exist in their Makefiles (or will be easy to add).

For your lambdapi project, note that Ypsilon Scheme already has openGL support and it would not be hard to add it to any other Scheme with a decent FFI. [Ypsilon's author is using GL ES2 for his iPad/iPhone pinball games so let's hope there will be a new version of Ypsilon with GL ES2!]
Posts: 292
Joined: Sun Sep 25, 2011 1:25 am
by jwatte » Mon Jan 02, 2012 12:44 am
DavidS said:


as this would be illegal



I wanted to call this out. In most jurisdictions, it is not illegal (as in, "against criminal law" or "you can go to jail") to break a software license term. Software license terms are a matter of civil law, meaning a contract between two parties, and breaking a contract term is not against the law per se. The aggrieved party may be able to take the matter to court and bring a civil claim against the breaching party, and that may result in some kind of civil damages, but it's still not "illegal." In most of the western world, only the elected voting body of a country can make laws; a contract (such as a license) only makes contractual agreements.

So, Broadcom can ship a closed-box binary BLOB, saying "this is hardware/firmware/source code." RPi can in turn link against that blob and ship a GPL kernel with BLOB support. This is not illegal.

Then, someone can come to RPi and say "please ship all source used to link that kernel," and they will redistribute the binary BLOB, which is what they used, and I believe that would actually fulfill the GPL requirements. However, I don't have a law degree, so my opinion on that matter isn't worth much.

The question then is: What would happen if someone (presumably the FSF) tried to bring RPi to court? Assuming the GPL is enforceable (which we don't know for sure), the court might compel RPi to disclose their source code. However, that's just the binary blob, which they already released! No action of RPi can compel some uninvolved third party (such as Broadcom) to take an action under the GPL. The worst that could happen would be for the FSF to tell RPi to stop shipping Linux kernels because they don't like the format of the "source code" used (a.k.a. the binary BLOB). Sure, they might do that, and then a zillion other separate distributors would likely start doing the same thing from all kinds of different countries and jurisdictions.

Taking the GPL to the extreme, something that uses specific hardware register configurations, timings, etc, is "hard linked" against the behavior of that hardware, and if you successfully make that argument, then the VHDL of the hardware chip that the software runs on must also be made available. However, in practice, this never becomes a problem. I would see the BLOB interface as very similar to that case. Also: compare System Management Mode in some SoC x86 implementations like the Geode, or even VMWare. VMWare virtualizes certain hardware ports, but it's not believed that this causes VMWare to become GPL when you run the Linux kernel on it.

This is a very complex licensing area where there are no clear answers, and the answer may be different in each jurisdiction. However, one thing is sure: it is not "illegal" (in most jurisdictions I know about.)
Posts: 87
Joined: Sat Aug 13, 2011 7:28 pm
by Bakul Shah » Mon Jan 02, 2012 12:53 am

foo said:


How will you even have a dumb framebuffer if the GPU internals are unknown?  You'll still have to do some reverse engineering just to get it into framebuffer mode and know where/how to draw pixels, and that would be the bootstrap into further discovery.


IIRC there was a post from someone affiliated with the Raspi Foundation saying that information for using a dumb framebuffer will be made available. Can you write individual pixels? I don't really know. We'll just have to wait and let them worry about the details!

Posts: 292
Joined: Sun Sep 25, 2011 1:25 am
by johnbeetem » Mon Jan 02, 2012 1:54 am
jwatte said:

The question then is: What would happen if someone (presumably the FSF) tried to bring RPi to court? Assuming the GPL is enforceable (which we don't know for sure), the court might compel RPi to disclose their source code.

IANAL, so this is not a legal opinion or legal advice.  It is simply my understanding of GPL from reading a great deal about Free (Libre) Software and cases related to it.  The Broadcom Binary Blob (BBB) is proprietary and not covered under GPL.  In some cases, linking something into Linux makes it a "derived work" which could make it subject to GPL, but I think that would only apply to the message-passing code and not to the BBB or user-space GPU binaries.

The GPL is enforceable, at least in the USA.  However, the only instances I've seen of it being enforced are when copyright holders of GPL code discover that someone is distributing binary versions of their code without offering users the source code.  This is a violation of the GPL license.  In almost all cases, a few letters convince the violator to come into compliance.  In rare cases, violators have had to be sued in court and AFAIK the GPL side has always won.  My understanding is that only copyright holders have standing to sue in the USA.  The FSF can only sue for GPL code that they hold the copyright to, so they have no standing to sue anyone who releases Broadcom code in binary form only.
Posts: 942
Joined: Mon Oct 17, 2011 11:18 pm
Location: The Coast
by tufty » Mon Jan 02, 2012 5:27 pm
foo said:


How will you even have a dumb framebuffer if the GPU internals are unknown?


jwatte said:


Really? Is this guaranteed? Are there data sheets on how to set up the format and access of the framebuffer, and the video timing?

From what I've seen, even getting that far needs some kind of collaboration from Broadcom.


...and so on.

Yes, this is guaranteed.  The Linux patches show exactly how to get a dumb framebuffer set up.  You do not need to know /anything/ about the internals of the GPU to do it, only how to send a message to the GPU through its mailbox queue, and how to do DMA.
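To make the above concrete, here is a minimal C sketch of the framebuffer request the public Linux patches appear to send over the VideoCore mailbox. The field order and the 16-byte alignment requirement are my reading of those patches, and the names `fb_request` and `fb_request_init` are hypothetical; the actual mailbox register write (and the DMA side) is omitted since it only exists on the hardware:

```c
/* Sketch of the dumb-framebuffer request sent to the GPU over the mailbox
 * (framebuffer channel).  Layout and alignment are assumptions taken from
 * the public Linux patches, not from an official datasheet. */
#include <stdint.h>

struct fb_request {            /* must be 16-byte aligned for the mailbox */
    uint32_t width, height;    /* requested physical size */
    uint32_t vwidth, vheight;  /* virtual size (e.g. for double buffering) */
    uint32_t pitch;            /* 0 on request; filled in by the GPU */
    uint32_t depth;            /* bits per pixel */
    uint32_t x, y;             /* offset into the virtual buffer */
    uint32_t pointer, size;    /* 0 on request; filled in by the GPU */
} __attribute__((aligned(16)));

/* Prepare a request; the GPU answers by writing pitch/pointer/size back
 * into the same structure. */
void fb_request_init(struct fb_request *r, uint32_t w, uint32_t h, uint32_t bpp)
{
    r->width = r->vwidth = w;
    r->height = r->vheight = h;
    r->depth = bpp;
    r->pitch = r->x = r->y = r->pointer = r->size = 0;
}
```

On real hardware the physical address of this structure (OR'd with the channel number) would then be written to the mailbox write register, after polling its status register for space.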

Bakul said:


For your lambdapi project, note that Ypsilon Scheme already has openGL support and it would not be hard to add it to any other Scheme with a decent FFI. [Ypsilon's author is using GL ES2 for his iPad/iPhone pinball games so let's hope there will be a new version of Ypsilon with GL ES2!]


FFI, of course, requires a number of things which I don't have, and that I'm not likely to implement.  Firstly, it requires me to have what one might conventionally call a "filesystem".  Secondly, it requires me to implement or emulate all the low-level Linux gubbins (including not only the low-level syscall stuff, but also dynamic loading of libraries from that non-existent filesystem).  It would probably be easier if I were aiming for a more "conventional" OS.

Simon
Posts: 1361
Joined: Sun Sep 11, 2011 2:32 pm
by DavidS » Mon Jan 02, 2012 6:14 pm
Being that the binary BLOB is not linked into the kernel, it is not a GPL issue.  The kernel modules used for DRI, which as I understand it are also distributed as binary only, are the issue.
ARM Assembly Language: For those that want: Simple, Powerful, Easy to learn, and Easy to debug.
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by DavidS » Mon Jan 02, 2012 6:16 pm
And OK, I should have worded myself differently: it would be a violation of civil contract (and in some jurisdictions trade) law.  Using the word 'illegal' can be misleading.
ARM Assembly Language: For those that want: Simple, Powerful, Easy to learn, and Easy to debug.
Posts: 1251
Joined: Thu Dec 15, 2011 6:39 am
Location: USA
by Chromatix » Mon Jan 02, 2012 8:07 pm
Okay, so the GPU is a black box; everything else is "sufficiently open".  Linux is supported officially, everything else is second class.  For €20 a go, I can live with that.

I am interested however in some details of what, precisely, we can do with the GPU in the officially supported environment.  I've worked with VideoCore 3 before - my understanding is that VC4 is a straightforward upgrade of that - and found it remarkably limited.

To wit:

1) Is X11 accelerated, and if so how much - can we use accelerated GL, GLES1, GLES2, VG, XRender, fill, copy, scroll?  Hardware mouse cursor?  How badly does it tank when you go beyond the accelerated features?

2) Can we use GLES/VG *outside* X11?  Wayland would like this, also bare-metal GL apps would be excellent for young experimenters.

3) Never mind the pixel/texel throughput, what is the command throughput?  What about Vsync for animation without tearing?

4) If we want to read pixel data back, I assume we have to copy it over the message-passing bus.  How long does this take?  (Latency and bandwidth.)

5) Is the framebuffer mappable by the CPU?  (VC3 did not allow this.)  If not, how much effort does it take to emulate this for oldskool stuff by uploading raw pixels over the bus?  What's the throughput and latency?  Do we have flexible pixel formats to trade off speed for quality?

6) Video support.  How easy is that to get running?  Can I sit down and write something that plays a H.264 video in an X11 window in an afternoon?  Can it overlay with UI elements on top of and/or surrounding the video?
The key to knowledge is not to rely on people to teach you it.
Posts: 430
Joined: Mon Jan 02, 2012 7:00 pm
Location: Helsinki
by jamesh » Mon Jan 02, 2012 8:54 pm
Chromatix said:


Okay, so the GPU is a black box; everything else is "sufficiently open".  Linux is supported officially, everything else is second class.  For €20 a go, I can live with that.

I am interested however in some details of what, precisely, we can do with the GPU in the officially supported environment.  I've worked with VideoCore 3 before - my understanding is that VC4 is a straightforward upgrade of that - and found it remarkably limited.

To wit:

1) Is X11 accelerated, and if so how much - can we use accelerated GL, GLES1, GLES2, VG, XRender, fill, copy, scroll?  Hardware mouse cursor?  How badly does it tank when you go beyond the accelerated features?

2) Can we use GLES/VG *outside* X11?  Wayland would like this, also bare-metal GL apps would be excellent for young experimenters.

3) Never mind the pixel/texel throughput, what is the command throughput?  What about Vsync for animation without tearing?

4) If we want to read pixel data back, I assume we have to copy it over the message-passing bus.  How long does this take?  (Latency and bandwidth.)

5) Is the framebuffer mappable by the CPU?  (VC3 did not allow this.)  If not, how much effort does it take to emulate this for oldskool stuff by uploading raw pixels over the bus?  What's the throughput and latency?  Do we have flexible pixel formats to trade off speed for quality?

6) Video support.  How easy is that to get running?  Can I sit down and write something that plays a H.264 video in an X11 window in an afternoon?  Can it overlay with UI elements on top of and/or surrounding the video?


I'm assuming your experience with the VC3 comes from Nokia. It's still valid; the VC4 on the BCM2835 is very similar, just much faster, and there is obviously the ARM on the same die.

1) Not yet. I believe there is work in progress for this but will need to check.

2) Yes.

3) Command throughput is pretty fast (the communications with the GPU can pass full frames at well over 30fps, and commands are pretty small)

4) If you are using a dumb framebuffer, no. If you are doing something accelerated, yes. But see the answer to 3.

5) Yes, the dumb framebuffer is mappable from the CPU. VC3 and VC4 coprocessor GPUs don't have this, as there wasn't an ARM to require the mapping! Although I think it could have been implemented over the messaging interface.

6) Technically possible, but I think more work needs to be done on this to handle the X11 side. It's certainly done on the Nokia 8... but that's Symbian.

All this stuff is going to get better and better as more code is written at the Broadcom end - note that work is ongoing on this stuff. I'll try and find out more about X11 acceleration as I think that's an interesting topic.
Raspberry Pi Engineer & Forum Moderator
Posts: 11488
Joined: Sat Jul 30, 2011 7:41 pm
by jamesh » Mon Jan 02, 2012 8:57 pm
GPL: The binary blob isn't Linux, so it does not contravene any GPL licensing.

Kernel driver to GPU: this is linked into the kernel and is therefore GPL, and will be released as such.

Libraries: At this stage these will remain closed, but there is some hope we may be able to get them opened up. As they are libraries that use no GPL code, there is no GPL violation.
Raspberry Pi Engineer & Forum Moderator
Posts: 11488
Joined: Sat Jul 30, 2011 7:41 pm
by Chromatix » Mon Jan 02, 2012 9:35 pm
Intriguing, the very fact that the framebuffer is CPU accessible is a huge advantage compared to the VC3.  It almost certainly means that acceleration for X11 is not so screamingly urgent, since the software implementation is not too awful in most respects.

It also means that learning about fundamental old-skool graphics techniques is feasible, and likewise graphics techniques that are not accelerated can be implemented reasonably.  That's *really* good news.

However, does this mean that some of the 128 or 256MB RAM is stolen by the GPU, like an integrated chipset?  It's not a dealbreaker but it's something to watch out for when figuring out what these things can do.

Also, on the understanding that the accelerator has to be halted before reading or writing raw pixels, is the framebuffer mapping cacheable by the CPU?  That makes a huge difference in performance for read-modify-write operations like alpha blending or even XOR.  The alternative is double buffering which wastes RAM.
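To illustrate why the read half of that read-modify-write matters so much when the mapping is uncached: every blended pixel must first be fetched back from the framebuffer before it can be combined and written. A minimal per-channel blend over 32-bit XRGB pixels (the pixel format here is chosen for illustration, not taken from any datasheet):

```c
/* Alpha blending is a read-modify-write on every destination pixel: with
 * an uncached framebuffer mapping, each of these blends pays for a slow
 * read of dst.  Per-channel blend over XRGB8888, alpha 255 = fully src. */
#include <stdint.h>

uint32_t blend_xrgb8888(uint32_t dst, uint32_t src, uint8_t alpha)
{
    uint32_t out = 0;
    for (int shift = 0; shift < 24; shift += 8) {  /* B, G, R channels */
        uint32_t d = (dst >> shift) & 0xFF;
        uint32_t s = (src >> shift) & 0xFF;
        uint32_t c = (s * alpha + d * (255 - alpha)) / 255;
        out |= c << shift;
    }
    return out;
}
```

With a cacheable (or double-buffered) destination, the `dst` read comes from cache or ordinary RAM instead of an uncached mapping, which is the performance difference being asked about.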
The key to knowledge is not to rely on people to teach you it.
Posts: 430
Joined: Mon Jan 02, 2012 7:00 pm
Location: Helsinki
by Bakul Shah » Mon Jan 02, 2012 9:49 pm
tufty said:


Yes, this is guaranteed.  The Linux patches show exactly how to get a dumb framebuffer set up.  You do not need to know /anything/ about the internals of the GPU to do it, only how to send a message to the GPU through its mailbox queue, and how to do DMA.


This stuff needs to be written down in English!


Bakul said:


For your lambdapi project, note that Ypsilon Scheme already has openGL support and it would not be hard to add it to any other Scheme with a decent FFI. [Ypsilon's author is using GL ES2 for his iPad/iPhone pinball games so let's hope there will be a new version of Ypsilon with GL ES2!]


FFI, of course, requires a number of things which I don't have, and that I'm not likely to implement.  Firstly, it requires me to have what one might conventionally call a "filesystem".  Secondly, it requires me to implement or emulate all the low-level Linux gubbins (including not only the low-level syscall stuff, but also dynamic loading of libraries from that non-existent filesystem).  It would probably be easier if I were aiming for a more "conventional" OS.


FFI == foreign function interface -- you need that even if you statically link in code written in another language. If you don't have it, you will end up reinventing pretty much everything (you *can* write code to interface specific C routines but that is just FFI implemented manually!). An ability to link in C code means you can be selective about what support code you have to implement in Scheme (and when).

For cross compiling/linking you already have a filesystem on the host. If/when your code becomes self-hosting, you will need a persistent store + symbol->object mapping, at which point you can store any object, including .a or .o.

When self hosting you will need conventional dynamic loading if you were to compile Scheme to machine code but not otherwise. In other words, you can statically link in all the compiled C code you will ever need and then only allow Scheme code loading at runtime.
Posts: 292
Joined: Sun Sep 25, 2011 1:25 am
by jamesh » Mon Jan 02, 2012 10:25 pm
Chromatix said:


Intriguing, the very fact that the framebuffer is CPU accessible is a huge advantage compared to the VC3.  It almost certainly means that acceleration for X11 is not so screamingly urgent, since the software implementation is not too awful in most respects.

It also means that learning about fundamental old-skool graphics techniques is feasible, and likewise graphics techniques that are not accelerated can be implemented reasonably.  That's *really* good news.

However, does this mean that some of the 128 or 256MB RAM is stolen by the GPU, like an integrated chipset?  It's not a dealbreaker but it's something to watch out for when figuring out what these things can do.

Also, on the understanding that the accelerator has to be halted before reading or writing raw pixels, is the framebuffer mapping cacheable by the CPU?  That makes a huge difference in performance for read-modify-write operations like alpha blending or even XOR.  The alternative is double buffering which wastes RAM.


Well, this is an integrated chipset, so the framebuffer does take memory from the total. Assume 1080p, that's just under 8MB for the framebuffer.

Not sure about caching. I think the way the GPU works means that the dumb FB is just another bitmap in the system, with the accelerated stuff working on a different bitmap (not necessarily screen sized, or even the same bitdepth), composited together for output to the HDMI, so I don't think acceleration affects dumb FB performance, except when considering the total memory access bandwidth, which you probably won't get near.
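The "just under 8MB" figure is straightforward to check: framebuffer size is width × height × bytes per pixel, so a 32bpp 1080p buffer is 1920 × 1080 × 4 = 8,294,400 bytes, a little under 8 MiB. A trivial helper showing the arithmetic (the 32bpp depth is an assumption; the actual depth is configurable):

```c
/* Framebuffer memory cost: width * height * bytes-per-pixel.
 * At 1080p/32bpp this is 1920*1080*4 = 8294400 bytes, just under 8 MiB. */
#include <stdint.h>

uint32_t fb_bytes(uint32_t width, uint32_t height, uint32_t bpp)
{
    return width * height * (bpp / 8);
}
```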
Raspberry Pi Engineer & Forum Moderator
Posts: 11488
Joined: Sat Jul 30, 2011 7:41 pm
by sylvan » Mon Jan 02, 2012 11:10 pm
Bakul said:


tufty said:


The Linux patches show exactly how to get a dumb framebuffer set up.


This stuff needs to be written down in English!



Why?

Source code for a working system is unambiguous whilst providing example and documentation as one.  Anything else, including English, is almost always more voluminous and is at best a compromise that leads to erroneous interpretation during implementation.
Posts: 115
Joined: Sun Nov 27, 2011 8:39 pm
by Borg 1.0 » Tue Jan 03, 2012 2:56 am
I'd like to add my $0.02, although I'm not sure if this is the right place/time...

I was interested in another thread regarding using iPad displays with the RasPi. The only information that was helpful was that the BC chip apparently provides DSI outputs, and an adapter of some kind would be required to connect to the LVDS input of the iPad.

So naturally, I looked for the BC datasheet... which doesn't exist. Given that little other documentation exists (apart from broad specs, mouthwatering as they are!), that surprised me.

So my question is, will there be enough information in the RasPi documentation to enable someone to make a DSI-LVDS adapter for those users interested in using the iPad's display?

I realise this is a very specific (and possibly pointless) technical question, which may have been answered elsewhere... In any case, if someone could direct me to an appropriate resource, I'd very much appreciate it.

FWIW, I very much appreciate the work and flexibility that both BC as the manufacturer and the Foundation team as the evangelists have put into this project to get it to where it is today. It's a very fine line to walk (from both ends!), and from the limited bits I've seen discussed, it's being done for the right reasons.

As a designer/developer using someone else's hardware as a basis, I don't mind having a 'black box' in the middle of the schematics, as long as I know what to expect on inputs to and outputs from said black box. If these conform to any standard that I can then learn about from another manufacturer, then I don't mind the two-step workaround. If the black box paradigm covers these signals, though, it does tend to create frustration. Of course, this all depends on how much RasPi documentation there will be available, so I've got my fingers crossed...

I do hope that specific details from outside the black box won't end up as collateral damage, in an effort to protect what's inside the box. Sure, not everyone wants to (or needs to) know what those abilities are, but as someone interested in offering alternative displays, etc, it would be a great pity to have to wait for reverse-engineering to provide answers that could and should exist, without compromising the IP or good faith commitments.

I hope this makes some kind of sense. I'm looking forward to seeing what the RasPi can do, as well as what it can be made to do!
Posts: 35
Joined: Sun Jan 01, 2012 1:02 pm