HPC Serialization Type Change

    Question

  • Hi,

    I am using HPC Pack 2016 in SOA mode, so all of my requests are submitted to the HPC brokers using DataContract serialization.

    I would like to know whether it is possible to switch the serialization to another format (protobuf in my case).

    I know this is possible in a standard WCF setup by marking the service interface with [ServiceContract, ProtoContract] and then changing the configuration on the client and server side, as described in the link below:

    https://stackoverflow.com/questions/30798689/how-to-know-if-a-wcf-service-uses-protobuf-net
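    For reference, here is a minimal sketch of what that standard WCF setup looks like when done in code rather than configuration. It assumes protobuf-net's ProtoBuf.ServiceModel integration, and the PricingRequest and IPricingService names are made up for illustration:

        // Minimal sketch, assuming protobuf-net's WCF support
        // (ProtoBuf.ServiceModel). All names below are hypothetical.
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using ProtoBuf;
        using ProtoBuf.ServiceModel;

        [DataContract, ProtoContract]
        public class PricingRequest
        {
            // [DataMember] keeps the default serializer working;
            // [ProtoMember(1)] assigns the protobuf field number.
            [DataMember(Order = 1), ProtoMember(1)]
            public double[] Inputs { get; set; }
        }

        [ServiceContract]
        public interface IPricingService
        {
            [OperationContract]
            double Compute(PricingRequest request);
        }

        class Client
        {
            static void Main()
            {
                var factory = new ChannelFactory<IPricingService>(
                    new NetTcpBinding(), "net.tcp://localhost:9000/pricing");
                // Swap DataContractSerializer for protobuf-net on this
                // endpoint; the service host needs the same behavior.
                factory.Endpoint.EndpointBehaviors.Add(new ProtoEndpointBehavior());
                IPricingService proxy = factory.CreateChannel();
                double result = proxy.Compute(
                    new PricingRequest { Inputs = new double[] { 1.0, 2.0 } });
            }
        }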

    Thanks in advance for your response.

    Rgds

    Tuesday, August 21, 2018 09:09

All Replies

  • We haven't looked into this yet, but we suppose it will work with some configuration changes. Could you tell us about the scenario that requires protobuf before we start any investigation?

    Qiufang Shi

    Wednesday, August 22, 2018 08:35
  • Thank you for your response.

    It would be really nice to make this work with protobuf.

    Most of our requests are smaller than 1.5 MB with DataContract, which keeps our process efficient in terms of speed and the amount of data transferred. Recently, however, a new type of request appeared that exceeds 20 MB with DataContract, and more than 300,000 of them have to be computed at the same time.

    We also noticed that sending requests to the brokers over several connections within a single process does not seem to exceed roughly 600 Mbps, whereas sending the same requests from multiple processes on the same computer does not hit this limit (we have already reached 1.8 Gbps from a single computer this way). Are you aware of any per-process throughput limitation in HPC?

    By the way, we have been searching for a faster, more compact way to transfer data and found protobuf. Thanks to it, our 20 MB requests are now about 3.5 MB, which would solve our issue in part.
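    To give an idea of the size difference, here is a small sketch (reusing the hypothetical PricingRequest type from the first post) that compares the serialized sizes of the two formats; the exact ratio of course depends on the payload:

        // Sketch comparing serialized sizes of the same object with
        // DataContractSerializer and protobuf-net. Figures vary by payload;
        // the 20 MB -> 3.5 MB numbers above come from our real requests.
        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using ProtoBuf;

        class SizeComparison
        {
            static void Main()
            {
                var request = new PricingRequest { Inputs = new double[250000] };

                long dcBytes, pbBytes;
                using (var ms = new MemoryStream())
                {
                    new DataContractSerializer(typeof(PricingRequest))
                        .WriteObject(ms, request);
                    dcBytes = ms.Length;
                }
                using (var ms = new MemoryStream())
                {
                    Serializer.Serialize(ms, request);
                    pbBytes = ms.Length;
                }
                Console.WriteLine(
                    $"DataContract: {dcBytes:N0} bytes, protobuf: {pbBytes:N0} bytes");
            }
        }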

    Hope my case is clear enough.

    Thanks for your help.

    Rgds

    Thursday, August 23, 2018 09:20
  • Hi,

    Have you had time to take a look at this point?

    Rgds

    Wednesday, September 12, 2018 12:10
  • Hi Alex,

    Currently we are in the final stage of releasing HPC Pack 2016 Update 2. We will look into your request and reply to this thread. From some quick research we know that WCF supports protobuf and that HPC Pack SOA supports customized bindings, so we believe this is doable, but we still need to figure out the details.
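    For example, the SOA client API already accepts a custom WCF binding on the BrokerClient, so the remaining detail is whether a protobuf endpoint behavior can be layered on top of it. A rough sketch of the binding part, reusing the names from the first post (the ComputeRequest/ComputeResponse message types would be the ones svcutil generates from the service and are hypothetical here):

        // Rough sketch: HPC SOA client with an explicit custom binding.
        // Whether protobuf-net's ProtoEndpointBehavior can be attached to
        // the underlying channel is the detail still to be investigated.
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using Microsoft.Hpc.Scheduler.Session;

        class SoaClientSketch
        {
            static void Main()
            {
                // "headnode" and "PricingService" are placeholder names.
                var info = new SessionStartInfo("headnode", "PricingService");
                using (Session session = Session.CreateSession(info))
                {
                    Binding binding = new NetTcpBinding(SecurityMode.Transport);
                    using (var client =
                        new BrokerClient<IPricingService>(session, binding))
                    {
                        // ComputeRequest/ComputeResponse: svcutil-generated
                        // message contracts for IPricingService.Compute.
                        client.SendRequest(new ComputeRequest(
                            new PricingRequest { Inputs = new double[] { 1.0 } }));
                        client.EndRequests();
                        foreach (var r in client.GetResponses<ComputeResponse>())
                        {
                            double value = r.Result.ComputeResult;
                        }
                    }
                }
            }
        }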


    Qiufang Shi

    Tuesday, September 18, 2018 03:21
  • Hi,

    So that's good news! Hope you'll figure it out soon :)

    Thank you

    Rgds

    Tuesday, September 18, 2018 13:58