probnmn.evaluators.joint_training_evaluator¶
class probnmn.evaluators.joint_training_evaluator.JointTrainingEvaluator(config: probnmn.config.Config, models: Dict[str, Type[torch.nn.modules.module.Module]], gpu_ids: List[int] = [0], cpu_workers: int = 0)[source]¶

Bases: probnmn.evaluators._evaluator._Evaluator
Performs evaluation for the joint_training phase, using batches of evaluation examples from JointTrainingDataset.

- Parameters
  - config: Config
    A Config object with all the relevant configuration parameters.
  - models: Dict[str, Type[nn.Module]]
    All the models which interact with each other during evaluation. This should come from JointTrainingTrainer.
  - gpu_ids: List[int], optional (default=[0])
    List of GPU IDs to use for evaluation; [-1] to use the CPU.
  - cpu_workers: int, optional (default=0)
    Number of CPU workers to use for fetching batch examples in the dataloader.
Examples
To evaluate a pre-trained checkpoint:
>>> config = Config("config.yaml")  # PHASE must be "joint_training"
>>> trainer = JointTrainingTrainer(config, serialization_dir="/tmp")
>>> trainer.load_checkpoint("/path/to/joint_training_checkpoint.pth")
>>> evaluator = JointTrainingEvaluator(config, trainer.models)
>>> eval_metrics = evaluator.evaluate(num_batches=50)
_do_iteration(self, batch: Dict[str, Any]) → Dict[str, Any][source]¶

Perform one iteration, given a batch. Take a forward pass to accumulate metrics in ProgramGenerator and NeuralModuleNetwork.

- Parameters
  - batch: Dict[str, Any]
    A batch of evaluation examples sampled from the dataloader.
- Returns
  - Dict[str, Any]
    A dictionary containing model predictions and/or batch validation losses of ProgramGenerator and NeuralModuleNetwork. Nested dict structure:

    {
        "program_generator": {"predictions", "loss"},
        "nmn": {"predictions", "loss"}
    }
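To make the nested structure concrete, here is a minimal, self-contained sketch of how per-batch outputs shaped like the return value of _do_iteration could be aggregated. The batch outputs and the mean_losses helper below are hypothetical stand-ins for illustration, not part of the probnmn API:

```python
from typing import Any, Dict, List


def mean_losses(batch_outputs: List[Dict[str, Dict[str, Any]]]) -> Dict[str, float]:
    """Average the "loss" entry per model across a list of iteration outputs."""
    totals: Dict[str, float] = {}
    for output in batch_outputs:
        for model_name, model_output in output.items():
            totals[model_name] = totals.get(model_name, 0.0) + float(model_output["loss"])
    return {name: total / len(batch_outputs) for name, total in totals.items()}


# Two fabricated iteration outputs following the documented nested dict
# structure: {"program_generator": {...}, "nmn": {...}}.
outputs = [
    {"program_generator": {"predictions": [3, 1], "loss": 0.5},
     "nmn": {"predictions": [7], "loss": 1.0}},
    {"program_generator": {"predictions": [2, 2], "loss": 0.25},
     "nmn": {"predictions": [5], "loss": 0.5}},
]
print(mean_losses(outputs))  # {'program_generator': 0.375, 'nmn': 0.75}
```

In the actual evaluator, this kind of accumulation happens internally when evaluate() iterates over validation batches; the sketch only illustrates the shape of the data each iteration produces.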