2 Replies Latest reply on Apr 12, 2013 8:28 AM by James Howard

    Dependent object sync workflow

      I've implemented a sync workflow for dependent resources using a special staging adapter, as described in this thread.


      My primary goal is to remove all the already synchronized (sent) data from the mobile device.


      Let's name the resources as follows: Head, Item, and Transaction.

      All of them are user-space resources. The RhoConnect adapters for the Head and Item resources only stage data for CUD (create/update/delete) operations. The Transaction adapter's job is to send all the staged data to the backend in one call.


      My sync workflow is the following:

      1. dosync_source(Head.get_source_name)

        Send all Head data to the RhoConnect adapter; the adapter saves it to Redis (Store.put_data).

      2. dosync_source(Item.get_source_name)

        Send all Item data to the RhoConnect adapter; the adapter saves it to Redis (Store.put_data).

      3. Create new Transaction object on the mobile device

      4. dosync_source(Transaction.get_source_name)

        Send the newly created Transaction object to RhoConnect.

        The adapter then pulls the staged Head and Item data out of Redis (Store.get_data) and sends it to the backend in one call.

        If everything goes well, the backend notifies the RhoConnect server of the deletions via the REST API. I have verified that the notification succeeds: the master documents are empty.

      5. SyncEngine.dosync

        Run a full sync again to remove the already-sent data from the mobile client's database.
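      For anyone trying to follow along, the staging flow in steps 1-4 can be sketched like this. It is a minimal in-memory simulation: the STORE hash stands in for RhoConnect's Redis-backed Store.put_data / Store.get_data, and all method, key, and attribute names are my own assumptions, not taken from a real adapter:

```ruby
# Minimal simulation of the staging workflow described above.
# A plain hash stands in for RhoConnect's Redis-backed Store;
# all names here are illustrative assumptions.
STORE = Hash.new { |h, k| h[k] = {} }

# Steps 1-2: the Head and Item adapters only stage incoming CUD data.
def stage(doc_key, object_id, attributes)
  STORE[doc_key][object_id] = attributes
end

# Step 4: the Transaction adapter pulls everything staged out of the
# store and sends it to the backend in a single call.
def commit_transaction(transaction)
  payload = {
    'transaction' => transaction,
    'heads'       => STORE['head_staging'],
    'items'       => STORE['item_staging']
  }
  # A real adapter would POST `payload` to the backend here.
  STORE['head_staging'] = {}
  STORE['item_staging'] = {}
  payload
end

stage('head_staging', 'temp-1', { 'number' => 'H-1' })
stage('item_staging', 'temp-2', { 'head_id' => 'temp-1', 'qty' => 3 })
result = commit_transaction({ 'id' => 'temp-3' })
```

Note that in this sketch the staged records are keyed by the device's temporary object IDs, which turns out to be the crux of the problem discussed below.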


      My problem is that after step 5 the mobile client database still contains all the Head and Item objects. As far as I know, Redis data changes originating from RhoConnect's REST API calls propagate to the mobile devices at the next sync, but that does not happen in this case. What could be the problem?

        • Re: Dependent object sync workflow

          The problem is that when you create objects on the device, they are all created with temporary IDs (to make them unique). When you then DoSync on Head and Item, you are supposed to create real IDs inside the adapter's create method and return them. But you only do Store.put_data. In the normal scenario, RhoConnect then sends the device a map of real IDs linked to temp IDs; that is how the device knows the new records can be removed from its pending queue. In your case, that step is broken. The final DoSync does return the full set of objects, but the map between real IDs and temp IDs does not exist, so the device keeps the records: from its standpoint, the records have still not been created. And we cannot decide by comparing records for equality, because two different records can have the same data and differ only by ID.
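          The mechanism being described can be simulated in a few lines. This is a simplification of the idea, not actual RhoConnect internals; the variable names and ID formats are assumptions:

```ruby
# Illustration of the temp-ID / real-ID mapping described above
# (a simplified model, not actual RhoConnect internals).
# The device tracks pending creates keyed by their temporary IDs.

# Case 1: the adapter's create method returns a real ID, so the
# server can send back a links map { temp_id => real_id }.
pending = { 'temp-1' => { 'number' => 'H-1' } }
links = { 'temp-1' => '1001' }
links.each_key { |temp_id| pending.delete(temp_id) }
# pending is now empty: the device knows the record was created.

# Case 2: create only does Store.put_data and returns no ID.
pending2 = { 'temp-2' => { 'number' => 'H-2' } }
links2 = {}  # no map is ever produced
links2.each_key { |temp_id| pending2.delete(temp_id) }
# pending2 still holds temp-2, so the device keeps re-sending it,
# even if a later full sync returns an identical-looking record:
# two records can carry the same data and differ only by ID.
```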


          What you should do is use sync associations.
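          For reference, a sync association is declared on the Rhodes client-side model. The fragment below is from memory of the Rhom API, so verify it against the docs for your version; the model and attribute names are assumptions, and it is a configuration fragment, not standalone runnable code:

```ruby
# app/Item/item.rb -- Rhodes client model (fragment; names assumed)
class Item
  include Rhom::PropertyBag

  # Declares that Item's head_id attribute references a Head object.
  # During sync, RhoConnect can then substitute the Head's real ID
  # for the temporary ID once the Head record has been created.
  belongs_to :head_id, 'Head'
end
```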

            • Re: Dependent object sync workflow
              James Howard

              I don't see how sync associations can be used in a transactional way; I believe this was the OP's issue.


              I have the same problem. Consider, for example, a parent object that contains child objects. If one of the children cannot be inserted (perhaps because of a network issue) after the parent has already been inserted and returned an id, things start to go very wrong on the device.


              After this kind of failure it is difficult, if not impossible, to get the device to recover. The remaining unsynced child objects can never resolve: the parent has already been written and assigned an id, but the child never received that id and cannot resolve it after the initial failure. Sometimes the failure is simply network-related and the child object just needs to be resynced, but this is not possible.


              What I have done as a hokey workaround is to store the device object_id from the parent object in my back-end db. Then, if a child object comes across with a non-integer parent id, I try to look up the order by the object_id that I stored previously. This is obviously a bad idea, but I have found no other fix.
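              In case it helps anyone, the lookup side of that workaround amounts to something like this. It is a back-end sketch under my own naming assumptions (the ORDERS hash stands in for the real database table):

```ruby
# Sketch of the workaround described above (backend side; the ORDERS
# hash stands in for a real database table, and all names are assumed).
# The backend keeps the device's object_id alongside its own integer id.
ORDERS = {}  # integer id => { device_object_id: ... }

def create_order(integer_id, device_object_id)
  ORDERS[integer_id] = { device_object_id: device_object_id }
end

# A child may arrive carrying either a real integer parent id or a
# temporary device id that never got mapped.
def resolve_parent_id(parent_id)
  return parent_id.to_i if parent_id.to_s =~ /\A\d+\z/
  # Fall back to looking the order up by the stored device object_id.
  found = ORDERS.find { |_, order| order[:device_object_id] == parent_id }
  found && found.first
end

create_order(42, 'temp-abc123')
resolve_parent_id('42')          # => 42
resolve_parent_id('temp-abc123') # => 42 (recovered via the stored object_id)
```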


              The ideal solution for me would be the transactional approach suggested in this thread. However, I have not found a way to accomplish it in practice.


              Does anyone know how to implement this correctly?