Discussion:
Remoting ports
Ken Ross
2008-06-25 22:09:59 UTC
We're using TCP Binary remoting between a client and server and are trying
to control the ports that are being used. We specify the port the server
listens on (1234) and specify the port for the client (1111). However, it
seems that the client is ignoring the 1111 and is selecting a port at
random.

In a perfect world I'd like to specify the same port for both client and
server, so that only a single port needs to be managed through firewalls, but
I'm not even sure this is possible.

Any suggestions/comments are MOST welcome!

Thanks!!

Ken
Jeff Winn
2008-06-26 05:14:25 UTC
The firewall only needs to allow the port that is opened on the server, not
the port the client is using. If you want the client to connect to port 1234
on the server, that's where it will connect. And if you require the client to
use port 1111, what happens when that port has already been taken by another
application running on the client machine?

That being said, you should be able to create the remoting objects
programmatically within your application and specify which port you want the
client to use. If you need some example code I can probably whip something up
tomorrow to demonstrate how to accomplish this. Whether it will work remains
to be seen; I've never worried about which port the client was using.
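
In the meantime, the server side of that setup - fixing the listener to 1234
so the firewall needs only one rule - would look roughly like this. (Just a
sketch; ServiceManager, IServiceManager, and "MyService" are placeholder
names, not anything from your code.)

```csharp
// Requires System.Collections, System.Runtime.Remoting,
// System.Runtime.Remoting.Channels, System.Runtime.Remoting.Channels.Tcp,
// System.Runtime.Serialization.Formatters.

// Server side: register a TCP channel listening on a fixed port (1234).
Hashtable props = new Hashtable();
props["port"] = 1234;

BinaryServerFormatterSinkProvider prov = new BinaryServerFormatterSinkProvider();
prov.TypeFilterLevel = TypeFilterLevel.Full;

ChannelServices.RegisterChannel(new TcpChannel(props, null, prov), false);

// Expose the service as a well-known singleton at tcp://host:1234/MyService.
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(ServiceManager), "MyService", WellKnownObjectMode.Singleton);
```

With that in place, the only inbound rule the firewall needs is TCP 1234 to
the server.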
Ken Ross
2008-06-26 13:02:02 UTC
Thanks Jeff,

If you have some code handy that would be great - if not I can probably toss
a proof-of-concept together fairly quickly. In my research it appears that
the common opinion is that you *can't* set the client port; it will always
select one at random. Ordinarily I would see this as a good thing - like you
said, it helps eliminate port contention on the client.

My problem stems from a customer's interpretation of this, due to what I'm
beginning to suspect is a misinterpretation of a packet-sniffer log. They
were very concerned to discover that the client software appeared to be
"scanning ports at random", which could be viewed as viral activity. What I
now believe they're seeing is simply the "source" information in the packet
header, which indicates the IP address and port the packet came from. The
destination would always be static (1234 in our example), but the source
port would differ on each connection.
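
If it helps to show them, the same behavior is visible with nothing but a raw
socket - each connection reports a different OS-assigned local (source) port
while the destination stays fixed. A quick sketch ("serverhost" and 1234
stand in for our server's address and port):

```csharp
using System;
using System.Net.Sockets;

// Open a few connections to the same destination and print each endpoint.
// The remote (destination) port is always 1234; the local (source) port
// is a fresh ephemeral port chosen by the OS for every connection.
for (int i = 0; i < 3; i++)
{
    using (TcpClient client = new TcpClient("serverhost", 1234))
    {
        Console.WriteLine("source:      " + client.Client.LocalEndPoint);
        Console.WriteLine("destination: " + client.Client.RemoteEndPoint);
    }
}
```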

Rather than trying to bulldoze a solution to this "problem", I may just need
to put together a very lucid [yet simple] explanation of what they're seeing
and *why* it isn't a symptom of port scanning.

Thanks again for your quick reply!

Ken
Jeff Winn
2008-06-26 15:15:01 UTC
I see why you're trying to force the client to use a specific port, but after
attempting my idea I found that it still didn't use the specified port
number. The "port" property included when creating the server determines
which port the listener uses; I thought I'd test whether it would work on the
client side as well - I'm afraid not.

I would give the customer a high-level overview of what they're seeing.
Getting too granular about how networks actually work usually just leads to
giving people headaches.

Here's the code I was checking for the client side, if you're interested:

// Requires System.Collections, System.Runtime.Remoting,
// System.Runtime.Remoting.Channels, System.Runtime.Remoting.Channels.Tcp,
// System.Runtime.Serialization.Formatters.

BinaryServerFormatterSinkProvider prov = new BinaryServerFormatterSinkProvider();
prov.TypeFilterLevel = TypeFilterLevel.Full;

// Attempt to pin the client to port 1111 - this is what did NOT work.
Hashtable props = new Hashtable();
props["port"] = 1111;

ChannelServices.RegisterChannel(
    new TcpChannel(null, new BinaryClientFormatterSinkProvider(props, null), prov),
    false);

IServiceManager manager = (IServiceManager)Activator.GetObject(
    typeof(IServiceManager), "tcp://localhost:7495/MyService");
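
For what it's worth, I believe the channel properties normally go to the
TcpChannel constructor itself rather than to the formatter sink provider.
But even that variant wouldn't do what you want: as far as I can tell, a
"port" entry on a client-side TcpChannel just makes the channel listen on
that port for server callbacks - the outgoing connection still gets an
OS-assigned ephemeral source port. A sketch of that variant, for
completeness:

```csharp
// Sketch only: pass the properties to the channel, not the sink provider.
// On a client-side TcpChannel, "port" opens a listener for callbacks from
// the server; it does not fix the outbound (source) port.
Hashtable channelProps = new Hashtable();
channelProps["port"] = 1111;

ChannelServices.RegisterChannel(
    new TcpChannel(channelProps, new BinaryClientFormatterSinkProvider(), prov),
    false);
```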
Ken Ross
2008-06-26 18:02:32 UTC
Thanks again - really appreciate you jumping in.

Ken
DeveloperX
2008-06-27 09:40:12 UTC
Hi, I've just popped into this group to ask a related question, so
thought I'd tag on here.

I have a solution that uses a server-activated (SAO) singleton shared by the
clients. So far this has worked flawlessly. We now have a request to add
clients at one of our satellite offices, but due to the way our organisation
works, they are behind a firewall.

I'm slightly confused about a few things. First, I've got two clients
running on my machine, and both have a connection to 300001 (the server's
port); each is then also connected to a second port on the server - 2714 and
3273 in this case. At my end the local ports are 1527 and 1637 respectively.

Really I'm just looking for general advice on routing this. Would I be
better off using a different transport? How are people dealing with this at
the moment?

Thanks
