OCS - LCS coexistence during migration

  • Question

  • Hi

     

    During a migration from LCS 2005 SP1 to OCS 2007, I'm having some connection issues between users on the OCS pool and the LCS pool.

     

    This is the setup:

    • One load-balanced LCS 2005 EE SP1 enterprise pool (contoso-lcs), two nodes, separate back-end
    • A single OCS 2007 EE Front-End server, one node, separate back-end and a/v conferencing servers
    • All servers/components are configured with certificates and require (M)TLS.

    Both pools are only configured for internal use. I don't have the hardware load balancer ready yet, so I've configured a secondary IP address on the OCS front-end server and made the enterprise pool DNS record point to that address.
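
    To double-check that part, a quick resolution test of the pool FQDN should show whether clients actually get the secondary address back. A rough sketch in Python (the pool FQDN and IP below are only placeholders for my environment):

        # Check that the enterprise pool DNS record resolves to the intended address.
        # The pool name and expected IP are placeholders for my setup.
        import socket

        POOL_FQDN = "ocspool.contoso.com"   # enterprise pool DNS record
        EXPECTED_IP = "192.168.10.20"       # secondary IP on the OCS front-end

        addresses = {info[4][0] for info in socket.getaddrinfo(POOL_FQDN, 5061, proto=socket.IPPROTO_TCP)}
        print(POOL_FQDN, "resolves to:", ", ".join(sorted(addresses)))
        print("OK" if EXPECTED_IP in addresses else "WARNING: pool record does not point at the secondary IP")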

     

    I've installed the required hotfixes on the LCS servers (KB911996 and KB921543).

     

    Running validation on the front-end server reveals an error for "Check two-party IM" between an LCS user and an OCS user. The error code is "504 Server time-out", and the authentication-info header says "temporarily cannot route" and "the peer actively refused the connection attempt", referring to the LCS front-end server.

     

    The event log on the OCS FE lists related (same?) issues, saying there have been numerous connection failures with the same server, with error 8007274D.
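
    For what it's worth, 8007274D wraps Winsock error 10061 (0x274D, WSAECONNREFUSED), which matches the "peer actively refused the connection attempt" text. A rough probe from the OCS front end can confirm whether anything on the LCS nodes accepts connections on TCP 5061 at all; this Python sketch uses placeholder node names for my environment:

        # Probe the LCS front-end nodes on the SIP MTLS port (5061).
        # Host names are placeholders for my environment.
        import socket

        LCS_NODES = ["lcs-fe1.contoso.com", "lcs-fe2.contoso.com"]
        SIP_MTLS_PORT = 5061

        for host in LCS_NODES:
            try:
                with socket.create_connection((host, SIP_MTLS_PORT), timeout=5):
                    print(host, "accepted the TCP connection on", SIP_MTLS_PORT)
            except ConnectionRefusedError:
                # Maps to Winsock 10061 (0x274D): nothing is listening on that address/port.
                print(host, "actively refused the connection on", SIP_MTLS_PORT)
            except OSError as exc:
                print(host, "connection failed:", exc)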

     

    For the users, the problem materializes like this:

    • When an LCS user initiates a conversation with an OCS user, it works great: presence works and messages route back and forth.
    • When an OCS user initiates a conversation with an LCS user, the message shows up in the popup, but when it is clicked, the full conversation window is empty, and further messages in the same conversation fail with "The following message could not be delivered to all recipients, possibly because one or more persons are offline: <the message>".

    The OCS front-end listens on all available IP addresses. I tried letting it listen only on the enterprise pool address, but that only resulted in failed conversations, even when initiated by the LCS user.

     

    How can I find out why the LCS server refuses the connection attempt? Any other things to look out for?

     

    Regards

    Sindre

    Thursday, July 17, 2008 2:15 PM

Answers

  • I found the cause of this issue. The problem was that the LCS pool was configured as an NLB cluster (single node), and the front-end server listened only on the NLB address. When the OCS server talks to the LCS server directly (OCS users initiating a conversation with an LCS user), it connects to the node itself and not through the NLB interface. When I changed the LCS server to listen on all IP addresses, it worked.
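
    For anyone hitting the same thing, one way to confirm it is to check, on the LCS front-end node itself, which local addresses actually have a listener on TCP 5061. A rough sketch (Python with the third-party psutil package; purely illustrative - before the change only the NLB address had a listener, afterwards the server listens on all addresses):

        # Run on the LCS front-end node: list local addresses listening on the SIP MTLS port.
        # Requires the third-party psutil package.
        import psutil

        SIP_MTLS_PORT = 5061

        listeners = sorted(
            {c.laddr.ip for c in psutil.net_connections(kind="tcp")
             if c.status == psutil.CONN_LISTEN and c.laddr and c.laddr.port == SIP_MTLS_PORT}
        )
        print("Addresses listening on port", SIP_MTLS_PORT, ":", listeners or "none")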

     

    -Sindre

    Monday, July 21, 2008 9:24 AM

All replies

  • Hi

     

    Not the same issue, but it's worth trying a separate pool IP and OCS server IP, as described in this post:

    https://blogs.pointbridge.com/Blogs/mcgillen_matt/Pages/Post.aspx?_ID=9

    Thursday, July 17, 2008 2:29 PM
  • Hi

     

    Thanks, but I've already done this, as I mentioned, with the secondary IP address for the enterprise pool.

     

    -Sindre

    Thursday, July 17, 2008 5:58 PM