I have a Python function that sends a callback to an external site (in this case a stockbroker) and organises the replies into a data structure. I also have another function that reads the data structure and performs some analysis.
Note that the analysis does not depend on the completion of the callback; the two need to operate independently.
AFAIK there are 2 ways this can be done:
1) Instantiate a class whose `__init__()` connects to the broker, sends the callback, and implements the parsing logic for any replies
2) Run the data collection and the analysis as separate processes with `multiprocessing`
Here’s pseudocode for method 1:
```python
# DATA MODULE
class Data(object):
    def __init__(self):
        self._connect_to_broker()

    def get_data(self):
        data = [parse replies and place into data structure]
        return data

# MAIN MODULE
def analyse_data(data):
    [code to analyse data here]

if __name__ == '__main__':
    data = Data()
    result = analyse_data(data.get_data())
```
Here’s pseudocode for method 2:
```python
# DATA MODULE
class Data(object):
    def connect_to_broker(self):
        [code that connects to broker and sends a callback]

    def get_data(self):
        data = [code to parse replies and place into data structure]
        return data

# MAIN MODULE
import multiprocessing

def analyse_data():
    [code to analyse data here]

if __name__ == '__main__':
    data = Data()
    data.connect_to_broker()
    job1 = multiprocessing.Process(target=data.get_data)
    job2 = multiprocessing.Process(target=analyse_data)
    job1.start()
    job2.start()
```
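One thing I'm not sure about with method 2: `job1` and `job2` are separate processes, so they don't share memory, and the data structure built by `get_data` wouldn't be visible to `analyse_data`. As far as I can tell you'd need something like a `multiprocessing.Queue` to hand the data across. Here's a simplified sketch of what I mean (the dummy list stands in for the parsed broker replies, and `get_data`/`analyse_data` are stripped-down stand-ins for the real functions):

```python
import multiprocessing

def get_data(queue):
    # In the real code this would parse the broker replies; a dummy
    # list stands in for the parsed data structure here.
    data = [1, 2, 3]
    queue.put(data)

def analyse_data(queue):
    data = queue.get()   # blocks until get_data() has put something
    print(sum(data))     # stand-in for the real analysis

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    job1 = multiprocessing.Process(target=get_data, args=(queue,))
    job2 = multiprocessing.Process(target=analyse_data, args=(queue,))
    job1.start()
    job2.start()
    job1.join()
    job2.join()
```

Is a queue the right tool here, or is there a more idiomatic way to share the data structure between the two?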
I am guessing method 2 is better, but I would like to understand more clearly how spawning a process with multiprocessing differs from calling methods on an instantiated class, especially when both are running at the same time (e.g. does the code in the instantiated class stop when the main script ends?).
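To make that question concrete, here is a small experiment I sketched (the `worker` function is just a made-up stand-in for `get_data`). As far as I understand, the interpreter waits for a normal `Process` to finish before exiting, whereas a `daemon` process is killed as soon as the main process ends:

```python
import multiprocessing
import time

def worker(label, delay):
    # Stand-in for get_data(): pretend to do work for a while.
    time.sleep(delay)
    print(label, 'finished')

if __name__ == '__main__':
    normal = multiprocessing.Process(target=worker, args=('normal', 1))
    daemonic = multiprocessing.Process(target=worker, args=('daemonic', 5),
                                       daemon=True)
    normal.start()
    daemonic.start()
    # The main script ends here. Python implicitly joins 'normal', so
    # 'normal finished' is printed, but 'daemonic' is still sleeping
    # when the main process exits and gets terminated without printing.
```

Have I understood that distinction correctly?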
I know this reflects a poor understanding of Python locks and so on, but that's why I'm here 🙂
Also, is there a better way to do this?
Any help is appreciated.