I see why you're trying to force the client to use a specific port, but after
attempting my idea I found that it still didn't use the specified port
number. Here's what I was attempting: the port property included when
creating the server determines which port the listener uses. I thought I'd
test whether the same thing would work on the client side - I'm afraid not.
I would try to give the clients a high-level overview of what they're
seeing. Getting too granular about how networks actually work usually just
gives people headaches.
Here's the code I was checking for the client side if you're interested:
// Attempt to force the client's outgoing port by handing a "port"
// property to the client formatter sink provider - as noted above,
// the client still picked a random port.
BinaryServerFormatterSinkProvider prov = new BinaryServerFormatterSinkProvider();
prov.TypeFilterLevel = TypeFilterLevel.Full;

Hashtable props = new Hashtable();
props["port"] = 1111;

ChannelServices.RegisterChannel(
    new TcpChannel(null, new BinaryClientFormatterSinkProvider(props, null), prov),
    false);

IServiceManager manager = (IServiceManager)Activator.GetObject(
    typeof(IServiceManager),
    "tcp://localhost:7495/MyService");
Post by Ken Ross
Thanks Jeff,
If you have some code handy that would be great - if not I can probably toss
a proof-of-concept together fairly quickly. In my research it appears that
the common opinion is that you *can't* set the client port; it will always
select one at random. Ordinarily I would see this as a good thing - like you
said, it helps eliminate port contention on the client.
My problem stems from what I'm beginning to suspect is a customer's
misinterpretation of a packet sniffer log. They were very concerned to
discover that the client software appeared to "scan ports at random", which
could be viewed as viral activity. What I now believe they're seeing is
simply the "source" information in the packet header, which indicates the
IP/port of the packet's origin. The destination would always be static (1234
in our example), but the source would be different on each connection.
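For what it's worth, that ephemeral-source-port behavior is plain TCP, not anything remoting-specific. A quick sketch with raw sockets (Python here rather than our actual client code, with OS-assigned localhost ports standing in for our 1234) shows the destination staying fixed while the source changes on each connection:

```python
import socket

# Stand-in "server" listening on a single fixed port (analogous to 1234).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # 0 = let the OS pick any free port
srv.listen(2)
dest = srv.getsockname()        # the one static destination the firewall sees

# Two client connections to that same destination.
c1 = socket.create_connection(dest)
c2 = socket.create_connection(dest)

# The destination port is identical both times; the *source* port - the
# number that shows up in a sniffer - is ephemeral and differs per connection.
src1 = c1.getsockname()[1]
src2 = c2.getsockname()[1]
print("destination:", dest[1], "sources:", src1, src2)

for s in (c1, c2, srv):
    s.close()
```

In a sniffer log that looks like the client "hopping" ports, but it's just the OS handing out a fresh source port per outbound connection.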
Rather than trying to bulldoze a solution to this "problem", I may just need
to put together a very lucid [yet simple] explanation of what they're seeing
and *why* it's not a port-scanning symptom.
Thanks again for your quick reply!
Ken
Post by Jeff Winn
The firewall would only need to address the port that is opened on the
server, not the port that the client is using. If you want the client to
connect to port 1234 on the server, that's where it will connect. And if you
require the client to use port 1111, what happens when that port gets picked
up by another application running on your client machine?
That being said, you should be able to create the remoting objects
programmatically within your application and specify which port you want the
client to use. If you need some example code, I can probably whip something
up tomorrow to demonstrate how to accomplish this. Whether it will work or
not remains to be seen; I've never worried about which port the client was
using.
Post by Ken Ross
We're using TCP Binary remoting between a client and server and are
trying to control the ports that are being used. We specify the port the
server listens on (1234) and specify the port for the client (1111).
However, it seems that the client is ignoring the 1111 and is selecting a
port at random.
In a perfect world I'd like to specify the same port for both client and
server so that only a single port needs to be managed through firewalls, but
I'm not even sure that's possible.
Any suggestions/comments are MOST welcome!
Thanks!!
Ken