Task #3189
closed
- Assignee set to Guilhem Moulin
Any chance you can get Luboš Luňák (l.lunak@collabora.com) some shell access?
Can't find him in pillar, so we need an SSH pubkey and a password digest for the shadow(5) database (using, for instance, `mkpasswd -m SHA-512` from the ‘whois’ package on Debian). For obvious reasons, please use hostmaster@documentfoundation.org for the latter (or both).
He might not need the password for read-only access though; but we're still lacking an SSH key either way.
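For reference, `mkpasswd -m SHA-512` prompts for the password and prints a shadow(5)-style crypt string. A sketch of the equivalent, assuming `mkpasswd` isn't installed (OpenSSL ≥ 1.1.1 can emit the same `$6$…` SHA-512 format; the password below is a placeholder, not a suggestion):

```shell
# mkpasswd -m SHA-512 (from the 'whois' package) prompts interactively;
# openssl produces the same $6$<salt>$<hash> format non-interactively.
# Send the resulting digest, never the cleartext password.
digest=$(openssl passwd -6 'example-password')
echo "$digest"
```

The `$6$` prefix is what marks the entry as SHA-512 crypt in shadow(5).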
Done Luboš Luňák, you should be able to log in to `llunak@crashreport.libreoffice.org` (~/.ssh/known_hosts snippet attached) with that key, and run `psql crashreport`. You have read-only access to these tables:
base_version
crashsubmit_uploadedcrash
processor_*
symbols_*
Look at the `processor_processedcrash.raw` column for the crash_id or cpu_info you're interested in. There is no index on cpu_info though, so filtering on it is going to be terribly slow.
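A sketch of how one might query and then pick apart that column; the SELECT uses the crash id from the example report linked later in this thread, and the JSON key names in the offline part are assumptions (Socorro-style processed crashes), not confirmed against the actual schema:

```shell
# On the server (read-only role), fetch the raw JSON for one crash; crash_id
# is indexed, cpu_info is not, so always filter on the former:
#   psql crashreport -c "SELECT raw FROM processor_processedcrash \
#       WHERE crash_id = 'a735081d-6347-4f46-a9c8-dfaf4347b69e'"
# Offline sketch of pulling cpu_info out of such a 'raw' value; the key
# names here are assumptions, check them against the real column first.
raw='{"cpu_arch":"x86","cpu_info":"family 6 model 42 stepping 7"}'
cpu_info=$(printf '%s' "$raw" | sed -n 's/.*"cpu_info":"\([^"]*\)".*/\1/p')
echo "$cpu_info"   # family 6 model 42 stepping 7
```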
- Status changed from New to Closed
Would it be possible to get access to the actual dumps? The processed info lacks many things that are included in the dumps, such as the actual /proc/cpuinfo content. And figuring out things like SSE2 support would be much simpler if I could get it directly for each entry rather than painfully trying to infer it from the CPU type.
My understanding is that the 'Raw' output is all we have from the client, e.g. the content of the 'raw dump' tab here:
https://crashreport.libreoffice.org/stats/crash_details/a735081d-6347-4f46-a9c8-dfaf4347b69e
And that anything else in the UI is built by looking up that address data in debugging symbols on the server.
I'd be surprised if we have more than what is there; I agree it is really sparse, and we'll need quite some typing and a database of family/model/stepping etc. to determine CPU features, but there it is: we can't re-generate that data easily at all.
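To make the gap concrete: on the client, a feature like SSE2 is a single token in the /proc/cpuinfo `flags` line, whereas deriving it server-side from family/model/stepping needs exactly the lookup table described above. A minimal sketch of the client-side check (the flags string below is made up):

```shell
# Hypothetical 'flags' line; on a real Linux client you would read it with
#   grep '^flags' /proc/cpuinfo
flags='fpu vme de pse tsc msr pae mce cx8 apic sep mtrr sse sse2'
case " $flags " in
  *' sse2 '*) has_sse2=yes ;;
  *)          has_sse2=no  ;;
esac
echo "$has_sse2"   # yes
```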
The crashsubmit_uploadedcrash table refers to .dmp files in /srv/crashreport/temp, so I expect that's what we get from clients. It seems we keep only a month's worth of backlog, but analyzing that should still be better than doing all that manual work based on just the subset of data we extract from it.
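Given that roughly a month of .dmp files is kept, a first step would be to count what's actually in the backlog. The sketch below runs against a throwaway directory so it is self-contained; on the server the path would be /srv/crashreport/temp:

```shell
# Count minidumps in a backlog directory; a temporary directory stands in
# for /srv/crashreport/temp so the example can run anywhere.
dir=$(mktemp -d)
touch "$dir/a.dmp" "$dir/b.dmp" "$dir/notes.txt"
dmp_count=$(find "$dir" -name '*.dmp' | wc -l)
echo "$dmp_count"
rm -rf "$dir"
```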