
Re: [Orekit Developers] [SOCIS 2011] New snapshot with event detection



Alexis Robert <alexis.robert@gmail.com> wrote:

Hi,

Hi Alexis,


So it has been nearly one week since the last snapshot. Event
detection was in fact much harder to implement than I originally
imagined, so I have a lot of things to tell and I'll try to be
concise :) This snapshot should support it.

Firstly, here is the snapshot URL :
https://www.orekit.org/forge/attachments/download/59/orekit-20110817.apk

Thanks, I downloaded it and tested the events quickly.


Follow the same instructions as for the older snapshots if you want
to install it.

Some details about this snapshot :

* I originally didn't think that the hardest task of this part would
be just drawing a table. On Android, there is no widget to draw a
table. You have GridView, which has nothing to do with our needs, and
TableLayout, which 1. doesn't draw borders and 2. doesn't manage
scrollbars.

So I ended up writing a widget inheriting TableLayout and using a
well-known hack to draw the borders, which is to play with
padding/margins to show a black border (I would be happy to use a
cleaner method. I did manage to find one, but when you add a border
to a cell, you end up having *two* borders between two adjacent
cells, which is not pretty at all).

After this, I wanted to add scrollbars, because there will be some
data here and it may not fit on the screen. On Android there are two
ScrollViews : HorizontalScrollView, and ScrollView for vertical
scrolling. BUT if you put a HorizontalScrollView in a ScrollView, the
one on top absorbs the touch events and you will not be able to
scroll. It's a well-known bug and the official response is "write your
own ScrollView". Which isn't a valid answer, because if you take
ScrollView.java and put it into your code, you will see that it
modifies private/protected fields you don't have access to ! (maybe
there is a way, but still, at that moment I was a little bit tired of
having already spent a day and a half on this)

I understand your frustration. Is the horizontal scrollbar necessary on phones ? Can we reduce the horizontal width so that only vertical scrolling is needed ? We could perhaps split the table by days (if it spans more than one day) and display only the hours instead of the full date, it would reduce the width a lot.


In fact I didn't do exactly that, but TableView (which is our own
widget) embeds some code from ScrollView and re-implements scrolling
using the lower-level helpers provided by the basic View class (from
which all widgets inherit. By the way, on Android, what you would call
a "widget" on Qt or other toolkits is called a View).
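
To give an idea, a re-implementation of that kind looks roughly like
the sketch below (this is not the actual TableView code, and the class
name is made up):

    import android.content.Context;
    import android.util.AttributeSet;
    import android.view.MotionEvent;
    import android.widget.TableLayout;

    /** Illustrative sketch only: a table widget doing its own 2D scrolling. */
    public class ScrollableTable extends TableLayout {

        private float lastX;
        private float lastY;

        public ScrollableTable(Context context, AttributeSet attrs) {
            super(context, attrs);
            // enabling the scrollbars is not enough by itself, they also
            // need to be initialized (the poorly documented part I talk
            // about just below)
            setHorizontalScrollBarEnabled(true);
            setVerticalScrollBarEnabled(true);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                lastX = event.getX();
                lastY = event.getY();
                return true;
            case MotionEvent.ACTION_MOVE:
                // scroll on both axes at once, which nested ScrollViews cannot do
                scrollBy((int) (lastX - event.getX()), (int) (lastY - event.getY()));
                awakenScrollBars();
                lastX = event.getX();
                lastY = event.getY();
                return true;
            default:
                return super.onTouchEvent(event);
            }
        }
    }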

The fun part is that it uses very poorly documented code (essentially
scrollbar initialization), where you only find "Go read the javadocs,
everything is documented !" posts by official Android developer
advocates (the fact that Android has both an official API and an
internal API made it even worse, because I couldn't find how
ScrollView initializes its scrollbars in the Android source code; I
still don't know what magic it uses to initialize them, maybe I
haven't looked enough). I still managed to do it thanks to some
pointers given in a StackOverflow post, but this whole thing took me
two to three days.

Maybe I just didn't see some easy and nice solution which was right
there, or maybe a more skilled Android developer than me would find
it instantly, but I currently don't know how to draw a table with
borders and both horizontal + vertical scrolling without doing this.

This widget could be better (for instance by adding velocity tracking
for scrolling), but it works :) And it's cleaner than generating HTML
code on the fly and using a WebKit widget :)

* The bug where some frames weren't loading because Eclipse doesn't
copy META-INF files from the Orekit jar to the apk is still ongoing,
and it was blocking me from using visibility detection from a ground
station (as the approach described in the tutorial Java files needs
these data files). So I made a little workaround to be able to
continue, but I still need to fix this.

This workaround was to copy these data files to the assets/ folder of
the project and to modify Orekit to look inside /assets/ instead of
/META-INF/. This is very crappy, but it's a temporary solution.
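
For the record, the reading side of this workaround boils down to the
sketch below (assuming a Context named context is available; the asset
file name is only an example):

    import android.content.Context;
    import android.content.res.AssetManager;
    import java.io.IOException;
    import java.io.InputStream;

    /** Illustrative only: read a data file bundled in the apk under assets/. */
    public final class AssetDataReader {

        /** e.g. openDataFile(context, "UTC-TAI.history") for the leap seconds file. */
        public static InputStream openDataFile(Context context, String name)
                throws IOException {
            // AssetManager.open() reads files that Eclipse always packs into
            // the apk, unlike the META-INF resources of the Orekit jar
            AssetManager assets = context.getAssets();
            return assets.open(name);
        }
    }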

I don't think it's crappy. Perhaps our own decision to put these data files under META-INF was wrong. I think it came from a Maven recommendation, I should check again. Anyway, as I see you did embed the data in the apk, it is fine with me. These are internal resources which users don't manage, so we can put them in any convenient place, as long as they are bound to the code itself and users don't risk forgetting them or using inconsistent versions of code versus data.


I've looked everywhere to figure out whether it's possible to force
Eclipse to embed these META-INF files, but the only way I found was
to add a custom Ant build script which would call the "aapt" tool
after building, to tell it to add the META-INF directory to the final
apk. I didn't do it because I have never written Ant scripts, and I
still want to try to find a better way to do that.

Don't worry about that, your solution is fine.


* Just by the way, here is how event detection works in the Android
application. I wanted to make it as extensible as possible, to enable
you to add events pretty easily.

  1. When you press "add event", it asks the Android system to search
for an activity which can respond to the Intent
"org.orekit.android.selector.EVENT". If you have multiple event
detectors, you will get a list of the different events available (for
now there is only visibility detection, so you will see nothing). You
should also be able to have event detectors declared by applications
outside Orekit (which will act as plugins).

This is a very interesting feature! Orekit provides predefined events, but it is expected that users can add their own events. Having the Android application allow using them too is a surprising and nice addition to what we had thought.
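
Just to check I picture the mechanism correctly, I imagine the "add event" handler does something like the sketch below (only the intent action string comes from your description, the rest is my guess):

    import java.util.List;

    import android.content.Context;
    import android.content.Intent;
    import android.content.pm.PackageManager;
    import android.content.pm.ResolveInfo;

    /** Sketch: list every activity able to respond to the event selector intent. */
    public final class EventDetectorFinder {

        public static List<ResolveInfo> findDetectors(Context context) {
            // activities declaring this action in their intent filter are
            // candidate event detectors, whether they live in our apk or in
            // a third-party "plugin" apk
            Intent intent = new Intent("org.orekit.android.selector.EVENT");
            PackageManager pm = context.getPackageManager();
            return pm.queryIntentActivities(intent, 0);
        }
    }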

  2. The event activity starts, may request parameters (it's not
mandatory), and should return an instance of EventProxy, which is an
abstract class. As the visibility detection event doesn't need any
parameters in this snapshot, the window will be closed before you see
it.
  3. This list of EventProxy instances goes to the computation window.
  4. The method instance.isNeedingStations() is called and returns
true if this event type requires "ground stations" information.
  5. The method instance.load(StationProxy station, EventLog eventlog)
is called, once per station if it requires station info, or one time
with station = null otherwise. It returns an EventDetector instance.
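
Written as code, the contract described in steps 2-5 is roughly the
sketch below (simplified, not the exact class):

    import org.orekit.propagation.events.EventDetector;

    /** Simplified sketch of the EventProxy contract described above. */
    public abstract class EventProxy {

        /** @return true if this event type needs ground station information */
        public abstract boolean isNeedingStations();

        /**
         * Build the Orekit detector, called once per station when stations
         * are needed, or once with station == null otherwise; detected
         * events are reported back to the UI through eventlog.
         */
        public abstract EventDetector load(StationProxy station, EventLog eventlog);
    }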

The separation of events and stations seems awkward to me. Playing with the application a little, it seems too complicated to use.

In real life space systems, events are used to schedule many things for the mission. Telemetry downlink and telecommand uplinks are bound to station visibility (we can connect to the satellite only when it is near one of the ground stations of the mission network), battery charging using solar arrays is bound to eclipse events (we can't use solar arrays when we don't see the Sun)... So people are already used to managing events, and they already associate stations and visibility events, for example. They don't say on one side "I am interested in visibility events" and then "here are the stations I manage". They would rather say: "I want to manage Aussaguel station events", "I want to manage Kiruna station events", "I want to manage Sun eclipse umbra events", "I want to manage Sun eclipse penumbra events"...

This means that when we add an event to be managed, we configure it completely. If it is a station visibility event, then the station coordinates and name are configured right from the beginning. This also means that from the user's point of view (i.e. in the user interface), the "Aussaguel station events detector" is not the same instance as the "Kiruna station events detector".

In the example above (with Aussaguel, Kiruna and eclipses), the user would set up 4 events, not 2.

  6. The eventlog parameter is an EventLog instance, which is here to
enable passing the data back to the UI; it will be described just
below.

EventLog is very simple to use: you have a write method to write a
line in the table. It works this way: eventlog.write("Visibility
Detection", new String[]{"key1", "value1", "key2", "value2"}); The
first argument is a tag to tell the user which event detector the
event comes from, the second argument is an array where the even
items are keys and the odd ones are values. Keys are column names.
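
For example, a visibility detector could report an AOS like this (the
column names are just an illustration):

    // illustrative call only: the tag identifies the detector, then the
    // array alternates column names and values
    eventlog.write("Visibility Detection",
                   new String[] { "Station", "Aussaguel",
                                  "AOS",     "2011-08-17T20:01:33" });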

It has been written keeping in mind that columns might be shared
between event detectors. By the way, when you add columns A, B and
then C, B, it will not write A, B, C (which may not respect the way
the author wants to display data), but A, C, B (it inserts C between
the existing A and B instead of inserting it at the end). This is not
done when the write() method is called, but when renderTable() is
called, which returns a String[][] instance you'll feed to
TableView.setTable().
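
To illustrate the rule with a tiny standalone example (this is not the
actual merging code, just the behaviour it is expected to reproduce):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    /** Tiny illustration of the column merging rule (not the real code). */
    public final class ColumnMergeDemo {
        public static void main(String[] args) {
            // existing columns, then a detector contributing columns C and B
            List<String> columns  = new ArrayList<String>(Arrays.asList("A", "B"));
            List<String> incoming = Arrays.asList("C", "B");
            for (int i = 0; i < incoming.size(); i++) {
                String name = incoming.get(i);
                if (columns.contains(name)) {
                    continue;
                }
                // insert the new column just before the first already-known
                // column that follows it, instead of appending it at the end
                int insertAt = columns.size();
                for (int j = i + 1; j < incoming.size(); j++) {
                    int known = columns.indexOf(incoming.get(j));
                    if (known >= 0) {
                        insertAt = known;
                        break;
                    }
                }
                columns.add(insertAt, name);
            }
            System.out.println(columns); // prints [A, C, B], not [A, B, C]
        }
    }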

I'm afraid I did not understand what you meant here. Could you explain it again to me ?

Using the application, I found something I forgot to mention. Some events are switches: they correspond to changes of a state. Visibility, for example, corresponds to switches between a visible state and a not visible state. Eclipse entry and exit are similar.

There are two different ways to represent these events.
The first way is to display all events chronologically and use one column to identify the type, as follows:

   AUS 2011-08-17T20:01:33  AOS
   PBD 2011-08-17T20:03:15  AOS
   AUS 2011-08-17T20:10:33  LOS
   PBD 2011-08-17T20:10:56  LOS

The second way is to match the event pairs and display the two events in a single line as follows:

   AUS 2011-08-17T20:01:33  AOS 2011-08-17T20:10:33  LOS
   PBD 2011-08-17T20:03:15  AOS 2011-08-17T20:10:56  LOS

The current application is somewhat a mix of both, as it uses one column for AOS and another one for LOS, but uses a separate line for each, thus having either an empty first column or an empty second column.

Would it be possible to have either the first or the second approach implemented instead of a mix ? Of course, the best would be to allow the user to select the output type he prefers, but that is probably too complex for this application. I guess the simplest approach is the first one, and it also reduces the width of the display, hence alleviating the need for a horizontal scrollbar.


By the way, the algorithm for merging columns is pretty naive (a very
sketchy complexity bound would be O(|lines| * |columns|^4), but I
don't know if it's even reachable); this shouldn't be a problem for
us as we shouldn't have a lot of columns.

You are right, the number of columns should be very small (at most 5 or 6, even if we want to add some meta-data later on).


* By the way, when you configure a station, the coarse location
fetching currently doesn't work on the emulator.

* Event detection has a HUGE problem: I'm using ListView to show the
list of events and the list of stations, but when there are too many
stations, you'll have a scroll bar on this ListView. And since we
want to be able to deal with small phone screens, I need to put a
ScrollView at the root of the window so you can scroll down to see
the end of the form (already done on the Frame form and the Impulse
maneuver form). But doing so on this form triggers the bug I
mentioned at the beginning of this mail : the top ScrollView
"absorbs" the touch events and the ListView fails to scroll.

The number of stations is often quite small (say a dozen for large ground networks, but sometimes as small as two stations, one for nominal operations and a backup one for contingency cases).


Currently, on a phone there is so much info vertically that you can't
use the form : the ListView doesn't have the room to show even 1
element, and the "+" button doesn't have the room to show its content.

If we don't split the configuration into one stations list and one events list, I hope we will reduce this effect and have something manageable.


There are three solutions for that :
  1. Reduce the amount of data shown (for instance moving the
"Stations" list to the visibility detection plugin, even if it may
mean entering station data multiple times if you have several plugins
which require them)
  2. Move it to a wizard (if I figure out how to do that)
  3. Make our own view which inherits ScrollView, adding a
protect(View v) method which measures the position/size of the View v
and transmits the TouchEvent to this View v when the event falls
inside it, instead of absorbing it (see the sketch after this list).
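
Here is a rough sketch of solution 3 (names are made up, and it
assumes the protected view sits directly inside the ScrollView
content):

    import java.util.ArrayList;
    import java.util.List;

    import android.content.Context;
    import android.graphics.Rect;
    import android.util.AttributeSet;
    import android.view.MotionEvent;
    import android.view.View;
    import android.widget.ScrollView;

    /** Rough sketch: a ScrollView that leaves some children alone. */
    public class ProtectingScrollView extends ScrollView {

        private final List<View> protectedViews = new ArrayList<View>();

        public ProtectingScrollView(Context context, AttributeSet attrs) {
            super(context, attrs);
        }

        /** Mark a view whose touch events must not be absorbed by the scroll. */
        public void protect(View v) {
            protectedViews.add(v);
        }

        @Override
        public boolean onInterceptTouchEvent(MotionEvent ev) {
            Rect hitRect = new Rect();
            for (View v : protectedViews) {
                // getHitRect() gives the view bounds relative to its parent,
                // so translate the touch point by the current scroll offset
                v.getHitRect(hitRect);
                if (hitRect.contains((int) ev.getX() + getScrollX(),
                                     (int) ev.getY() + getScrollY())) {
                    return false; // let the protected view handle the gesture itself
                }
            }
            return super.onInterceptTouchEvent(ev);
        }
    }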

* Also, in case you want some benchmarks about event detection on my
Nexus One (1 GHz CPU, Android 2.3 with JIT enabled, with a cold
Orekit cache) : 56 seconds to load the data, 16 seconds to run the
simulation. I've set maxCheck to 5 seconds hoping to improve that,
but is it ok ?

Yes, it's OK for station visibilities. It could even be set to a larger value, like 60 seconds for example. This setting is the maximal interval between two checks of the switching function. When set to 5 seconds, it ensures that an AOS followed by a LOS more than 5 seconds later cannot be missed, but an AOS followed by a LOS only 3 seconds later may be missed (it can also be detected, depending on whether by chance one of the checks occurs between the two events). For station visibilities, we are often not interested in short duration visibilities. First, we don't have much time to do anything interesting with the satellite (retrieving telemetry, sending telecommands), and second the signal would probably be less reliable since the spacecraft would be only slightly above the horizon, at the detection threshold. In many operational systems, we consider visibilities only when they are at least one minute long, so we can safely set the maxCheck threshold to 60 seconds and save some computation.
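
Just to fix ideas, setting up a station visibility detector with a relaxed maxCheck would look like the sketch below (the ElevationDetector constructor is written from memory, so double check it against the current API; stationFrame stands for an already configured TopocentricFrame):

    import org.orekit.frames.TopocentricFrame;
    import org.orekit.propagation.events.ElevationDetector;
    import org.orekit.propagation.events.EventDetector;

    public final class VisibilitySetup {

        /** Build a visibility detector with a relaxed maxCheck (sketch only). */
        public static EventDetector buildVisibilityDetector(TopocentricFrame stationFrame) {
            double maxCheck  = 60.0;                 // s, sub-minute passes are not useful
            double elevation = Math.toRadians(5.0);  // example threshold above the horizon
            return new ElevationDetector(maxCheck, elevation, stationFrame);
        }
    }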

Should I make it a setting ?

It would be interesting, as the maxCheck selection would probably not be the same for other event types like eclipses.


* The UI changes we discussed are not included yet, because I wanted
to focus on finishing the features before attacking UI polishing :)

This is fine with me. We are exploring and deciding as we go. I prefer to progress this way and let you decide how you organize your work. As one often reads, open-source is "scratching one's own itch", so it seems right to let you investigate the various parts as you see fit.


By the way, as there is only visibility detection right now, which
event detector would you like me to implement next, so you can try
it ? :)

Yes, I would like to have eclipses too. Users should be able to set up a detector for umbra entry/exit (full eclipses) and another detector (if they want) for penumbra entry/exit (partial eclipses).


Have a nice day, and sorry for having written such a long text.

No problem, we are here to read it.

best regards,
Luc


If you have any questions, or any suggestions, feel free to ask :)

Alexis Robert




