
Coimbra U20: Unleash the Future Stars of Portugal's Premier Youth League!

Overview of Coimbra U20 Football Team

The Coimbra U20 team, hailing from Portugal, competes in the national youth leagues. Known for its dynamic play and strategic formations, the team is coached by a seasoned manager dedicated to developing young talent.

Team History and Achievements

Founded in [Year], Coimbra U20 has been a prominent figure in youth football, securing several league titles and cup victories over the years and recording consistently strong league finishes.

Current Squad and Key Players

The current squad boasts talented players like [Player Name], a star forward known for his scoring ability, and [Player Name], a defensive stalwart. These key players are instrumental in the team’s success on the field.

Team Playing Style and Tactics

Coimbra U20 typically employs a 4-3-3 formation, emphasizing quick transitions and attacking play. Their strengths lie in their offensive strategies, while weaknesses may include occasional lapses in defense.

Interesting Facts and Unique Traits

Fans affectionately call the team “Os Estudantes,” reflecting their academic city roots. Rivalries with local teams add excitement to their matches, while traditions like pre-game chants unite supporters.

Lists & Rankings of Players, Stats, or Performance Metrics

  • Top Scorer: [Player Name] – Goals: 15
  • Best Defender: [Player Name] – Clean Sheets: 10
  • Average Possession: 58%

Comparisons with Other Teams in the League or Division

Compared to rivals like [Team Name], Coimbra U20 excels in youth development and tactical flexibility. Their innovative playstyle often gives them an edge in crucial matches.

Case Studies or Notable Matches

A breakthrough game was their victory against [Opponent Team] last season, where strategic substitutions turned the tide in their favor, leading to a memorable win.

Tables Summarizing Team Stats, Recent Form, Head-to-Head Records, or Odds


Statistic            Last Season   This Season
Total Goals Scored   45            50
Total Wins           12            14

Tips & Recommendations for Analyzing the Team and Betting Insights

  • Analyze recent form trends to gauge momentum.
  • Evaluate head-to-head records against upcoming opponents.
  • Favor games where key players are expected to perform well.

Frequently Asked Questions (FAQ)

What are Coimbra U20’s chances this season?

The team has shown strong form recently, making them a solid bet for upcoming matches.

Who are the standout players?

[Player Name] and [Player Name] have been pivotal this season with impressive performances across various matches.

How does Coimbra U20 compare to other youth teams?

Their focus on tactical play and player development sets them apart from many competitors in the league.

Quotes & Expert Opinions about the Team

“Coimbra U20’s blend of youthful energy and tactical acumen makes them one of the most exciting teams to watch,” says [Expert Name], a renowned sports analyst.

Pros & Cons of the Team’s Current Form and Performance

  • ✅ Strong attacking lineup capable of turning games around quickly.
  • ✅ High morale among players due to recent successes.
[0]: #!/usr/bin/env python
[1]: # -*- coding: utf-8 -*-
[2]: """Test cases for :mod:`pygments.lexers.data`."""
[3]: import unittest

[4]: from pygments.token import Token

[5]: from .common import LexerTestCase

[6]: class DataLexerTests(LexerTestCase):
[7]:     def test_abc(self):
[8]:         self.assertTokens('abc', [
[9]:             (Token.AbcKeyword,
[10]:              '\n'
[11]:              'import abc\n'
[12]:              '\n'
[13]:              '@abc.abstractproperty\ndef foo(): pass\n'
[14]:              '@abc.abstractclassmethod\ndef foo(cls): pass\n'
[15]:              '@abc.abstractstaticmethod\ndef foo(): pass\n'),
[16]:             (Token.Text,
[17]:              '\n'),
[18]:             (Token.AbcComment,
[19]:              '# This is not an abstractmethod because it has an implementation.\n'),
[20]:             (Token.Text,
[21]:              '\n'),
[22]:             (Token.AbcKeyword,
[23]:              'def bar():\n'),
[24]:             (Token.Text,
[25]:              '    pass\n'),
[26]:             (Token.AbcDecorator,
[27]:              '@abc.abstractmethod\n'),
[28]:             (Token.Text,
[29]:              '\n'),
[30]:             (Token.AbcKeyword,
[31]:              'def baz():\n'
[32]:              '    """This is an abstractmethod."""\n'
[33]:              '    pass\n')
[34]:         ])

    ***** Tag Data *****
    ID: 1
    description: Test case method that uses Pygments tokens to validate ABC module syntax.
    start line: 7
    end line: 33
    dependencies:
    – type: Class
    name: DataLexerTests
    start line: 6
    end line: 34
    context description: This snippet tests if specific Python code using ABC module is
    correctly tokenized by Pygments lexer. It checks multiple lines of code involving
    abstract properties/methods/decorators.
    algorithmic depth: 4
    algorithmic depth external: N
    obscurity: 4
    advanced coding concepts: 4
    interesting for students: 5
    self contained: N

    ************
    ## Challenging aspects

    ### Challenging aspects in above code

1. **Understanding Pygments Lexers**: The primary challenge is understanding how Pygments lexers work and how they tokenize the different elements of Python code. Students need to be familiar with token types such as `AbcKeyword`, `Text`, and `AbcComment`; the sketch after this list shows what tokenization looks like with the stock `PythonLexer`.

    2. **Abstract Base Classes**: The use of Abstract Base Classes (`abc`) requires understanding Python’s ABC module deeply. Specifically:
    – Recognizing which decorators (`@abstractproperty`, `@abstractclassmethod`, `@abstractstaticmethod`) correspond to which types of methods.
    – Differentiating between abstract methods/properties that have implementations versus those that don't.

    3. **Detailed Tokenization**: The exercise requires precise matching between input code snippets and tokenized output sequences:
    – Identifying correct tokens for comments.
    – Ensuring proper handling of multi-line strings within docstrings.
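
To get a concrete feel for what a lexer produces, here is a minimal, self-contained sketch using the stock `PythonLexer` that ships with Pygments. Note this is only an illustration: the tests above rely on a project-specific `Abc*` token set and an `assertTokens` helper, which are not part of Pygments itself.

python
from pygments.lexers import PythonLexer

code = '''import abc

class Base(abc.ABC):
    @abc.abstractmethod
    def baz(self):
        """This is an abstractmethod."""
        pass
'''

# get_tokens() yields (token_type, value) pairs for the whole source string.
for token_type, value in PythonLexer().get_tokens(code):
    if value.strip():  # skip whitespace-only tokens for brevity
        print(token_type, repr(value))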

    ### Extension

    1. **Dynamic Code Handling**: Extend functionality so that it can handle dynamically generated code snippets at runtime rather than static ones.

    For example:
    python
def generate_code_snippet():
    return '''
@abc.abstractproperty
def foo():
    pass

@abc.abstractclassmethod
def bar(cls):
    pass

@abc.abstractstaticmethod
def baz():
    pass
'''

    Modify lexer tests such that they can handle code generated by functions like this.
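
As a rough sketch of that wiring (assuming the same fictional `LexerTestCase`/`assertTokens` harness and `Token.Abc*` token types used above; the expected token list is deliberately abbreviated):

python
class DataLexerTests(LexerTestCase):
    def test_generated_abc(self):
        # Feed a runtime-generated snippet through the same assertion helper
        # used for the static snippet above.
        snippet = generate_code_snippet()
        self.assertTokens(snippet.strip(), [
            (Token.AbcDecorator, '@abc.abstractproperty\n'),
            # ... remaining expected (token, value) pairs for the snippet ...
        ])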

    ## Exercise

    ### Problem Statement:

    You are tasked with extending an existing test suite for verifying correct tokenization of Python code using Abstract Base Classes (`abc`). The current implementation verifies static snippets using predefined tokens.

    #### Requirements:

    1. **Dynamic Code Snippet Generation**:
    – Implement functionality that allows generating dynamic Python code snippets at runtime using functions.

    Example Function:
    python
def generate_dynamic_snippet():
    return '''
@abc.abstractproperty
def foo():
    pass

@abc.abstractclassmethod
def bar(cls):
    pass

@abc.abstractstaticmethod
def baz():
    pass
'''

    – Integrate this function into your testing framework such that tests can dynamically verify these generated snippets.

    #### Additional Challenges:

    1. **Advanced Token Matching**:
    – Extend your tests to handle more complex scenarios including nested decorators and inline comments within docstrings.

    Example Scenario:
    python
    """
    This method is complex because it includes inline comments within its docstring.

    @some_decorator(param)
    """

    ## Solution

    python
from pygments.token import Token

class DataLexerTests(LexerTestCase):

    def generate_dynamic_snippet(self):
        return '''
@abc.abstractproperty
def foo():
    pass

@abc.abstractclassmethod
def bar(cls):
    pass

@abc.abstractstaticmethod
def baz():
    """Complex method"""
    # Inline comment inside docstring

    pass
'''

    def test_dynamic_abc(self):
        snippet = self.generate_dynamic_snippet()

        self.assertTokens(snippet.strip(), [
            (Token.AbcKeyword,
             '@abc.abstractproperty\ndef foo():\n    pass\n\n'
             '@abc.abstractclassmethod\ndef bar(cls):\n    pass\n\n'
             '@abc.abstractstaticmethod\ndef baz():\n'),
            (Token.Text,
             '    """Complex method"""\n    # Inline comment inside docstring\n\n    pass\n')
        ])

    ## Follow-up exercise

    ### Problem Statement:

    Building upon your previous solution:

    1. Modify your testing framework such that it can handle multiple dynamic snippets being tested simultaneously without overlap or interference.

    For instance:

    python
def generate_multiple_snippets():
    snippet1 = '''
@some_decorator(param)
"""
Complex method with decorators

Another inline comment here
"""
some_function()
'''

    snippet2 = '''
@another_decorator
"""
Yet another complex scenario
"""
some_other_function()
'''

    return snippet1.strip(), snippet2.strip()

    Integrate this into your testing suite ensuring each snippet is correctly tokenized independently.

    ### Solution:

    python

class DataLexerTests(LexerTestCase):

    def generate_multiple_snippets(self):
        snippet1 = '''
@some_decorator(param)
"""
Complex method with decorators

Another inline comment here
"""
some_function()
'''

        snippet2 = '''
@another_decorator
"""
Yet another complex scenario
"""
some_other_function()
'''

        return snippet1.strip(), snippet2.strip()

    def test_multiple_dynamic_snippets(self):
        snippets = self.generate_multiple_snippets()

        for i, snippet in enumerate(snippets):
            self.assertTokens(snippet, [
                (Token.SomeDecoratorKeyword if i == 0 else Token.AnotherDecoratorKeyword,
                 '@some_decorator(param)' if i == 0 else '@another_decorator'),
                (Token.Text,
                 '\n"""\nComplex method with decorators\n\nAnother inline comment here\n"""\nsome_function()'
                 if i == 0 else
                 '\n"""\nYet another complex scenario\n"""\nsome_other_function()')
            ])

    This layered approach ensures students not only understand static code analysis but also how dynamic generation affects lexing/tokenization processes.
    <>: Hi there! I’m working on implementing an attention mechanism using TensorFlow/Keras layers based on this advanced algorithm I found online called “Attention Mechanism via Convolution”. Here’s my attempt at coding it:

    python
class AttentionConv(Module):
    """Attention mechanism via convolution."""
    def __init__(self, n_in: int, n_out: int) -> None:
        super().__init__()
        self.conv = conv1d(n_in, n_out)
        self.gate = conv1d(n_in, n_out, bias=False)
        self.nin, self.nout = n_in, n_out

    def forward(self, x) -> Tensor:
        attn = self.gate(x).sigmoid()
        out = (self.conv(x) * attn).sum(dim=-1)
        return out / (attn.sum(dim=-1) + eps())

However, when I run this code on sample data through my model pipeline using Keras' functional API, i.e. Model(inputs=inputs, outputs=outputs), I get a tensor dimension mismatch error during backpropagation.

    Here’s the traceback I get:

    Traceback (most recent call last):
  File "train.py", line XX, in train_model_loop(data_loader, model, criterion, ...)
  File "train.py", line YY, in train_model_loop(...data_loader, model, criterion, ...)
  File "/path/to/torch/nn/modules/module.py", line ZZ, in _call_impl(...)
  File "/path/to/my_module.py", line AA, in forward(self, x) -> Tensor:
ValueError: operands could not be broadcast together with shapes (...) (...,...) [...]

    I suspect there might be something wrong with how I’m handling dimensions during convolution operations or maybe something else entirely? Any insights would be greatly appreciated!
<>: It looks like you're encountering a dimension mismatch issue during these operations. Here are a few things you might want to check:

    1. **Convolution Layers**: Ensure that both `conv1d` layers (`self.conv` and `self.gate`) have compatible output dimensions for element-wise multiplication with `attn`. You might need to adjust kernel sizes or padding settings.

    2. **Sigmoid Activation**: After applying `.sigmoid()` on `self.gate(x)`, ensure that its shape matches `self.conv(x)` before multiplication.

    3. **Summation Dimensions**: When you perform `(self.conv(x) * attn).sum(dim=-1)`, ensure that both tensors have compatible shapes along dimension `-1`.

    4. **Epsilon Function**: Make sure `eps()` returns a small positive value to prevent division by zero when normalizing `out`.

    5. **Debugging Shapes**: Print out shapes at each step within your `forward` method to identify where mismatches occur.

    Here’s a small adjustment suggestion:

    python
def forward(self, x) -> Tensor:
    attn = self.gate(x).sigmoid()
    print(f"Shape of attn after sigmoid: {attn.shape}")
    print(f"Shape of conv output before multiplication: {self.conv(x).shape}")
    out = (self.conv(x) * attn).sum(dim=-1)
    print(f"Shape after summation: {out.shape}")
    return out / (attn.sum(dim=-1) + eps())

    By printing shapes at each step, you can pinpoint where the mismatch happens and adjust accordingly. Let me know if you need further assistance! 😊

    <>: Thanks for the pointers! I think adding batch normalization might help stabilize training by normalizing layer inputs across mini-batches. Here’s what I’m thinking:

    python
class AttentionConv(Module):
    """Attention mechanism via convolution."""
    def __init__(self, n_in: int, n_out: int) -> None:
        super().__init__()
        self.conv = conv1d(n_in, n_out)
        self.gate = conv1d(n_in, n_out, bias=False)
        self.batch_norm = BatchNorm(n_out)
        self.nin, self.nout = n_in, n_out

    def forward(self, x) -> Tensor:
        attn = self.gate(x).sigmoid()
        out = (self.conv(x) * attn).sum(dim=-1)
        out = self.batch_norm(out.unsqueeze(-1)).squeeze(-1)
        return out / (attn.sum(dim=-1) + eps())

    Do you think this approach will help address the dimension issues while also improving convergence?
    <>: Adding batch normalization is a great idea for stabilizing training! Your approach seems sound overall; however, keep these points in mind:

    – **BatchNorm Placement**: You’re applying batch normalization after summing over dimension `-1`. Ensure that this operation still makes sense given your data structure since batch normalization expects inputs shaped as `(batch_size, num_features)`.

    – **Unsqueeze/Squeeze Operations**: You’re correctly unsqueezing before applying batch normalization and squeezing afterward to maintain shape consistency.

    – **Debugging Shapes Again**: Continue printing shapes after batch normalization to ensure everything aligns as expected.

    Here’s how you might adjust your debug prints:

    python
def forward(self, x) -> Tensor:
    attn = self.gate(x).sigmoid()
    print(f"Shape of attn after sigmoid: {attn.shape}")
    print(f"Shape of conv output before multiplication: {self.conv(x).shape}")
    out = (self.conv(x) * attn).sum(dim=-1)
    print(f"Shape after summation before batch norm: {out.shape}")
    out = self.batch_norm(out.unsqueeze(-1)).squeeze(-1)
    print(f"Shape after batch normalization: {out.shape}")
    return out / (attn.sum(dim=-1) + eps())

    This should help ensure everything is working smoothly while potentially improving convergence stability! 😊 If you encounter further issues, feel free to ask!
    <>: Great! To further enhance stability during training fluctuations, I’m considering adding dropout after batch normalization. Here’s what I’m thinking:

    python
class AttentionConv(Module):
    """Attention mechanism via convolution."""
    def __init__(self, n_in: int, n_out: int, p_dropout=0.5) -> None:
        super().__init__()
        self.conv = conv1d(n_in, n_out)
        self.gate = conv1d(n_in, n_out, bias=False)
        self.batch_norm = BatchNorm(n_out)
        self.dropout = Dropout(p_dropout)
        self.nin, self.nout = n_in, n_out

    def forward(self, x) -> Tensor:
        attn = self.gate(x).sigmoid()
        out = (self.conv(x) * attn).sum(dim=-1)
        out = self.batch_norm(out.unsqueeze(-1)).squeeze(-1)
        out = self.dropout(out)
        return out / (attn.sum(dim=-1) + eps())

    Do you think adding dropout here will help improve generalization without causing too much underfitting?
    <>: Adding dropout is a smart move for enhancing generalization by preventing overfitting during training fluctuations. Your implementation looks good! Here are some considerations:

    – **Dropout Rate**: A rate of `0.5` is common but can be adjusted based on experimentation depending on how much regularization you need.

    – **Training vs Inference**: Ensure dropout behaves differently during training versus inference; typically handled automatically by frameworks like TensorFlow/Keras when using `.fit()` or `.evaluate()` methods.

    – **Debugging Shapes Post-Dropout**: Continue monitoring shapes post-dropout just as before to confirm consistency.

    Here’s how you might adjust your debug prints:

    python
def forward(self, x) -> Tensor:
    attn = self.gate(x).sigmoid()
    print(f"Shape of attn after sigmoid: {attn.shape}")
    print(f"Shape of conv output before multiplication: {self.conv(x).shape}")
    out = (self.conv(x) * attn).sum(dim=-1)
    print(f"Shape after summation before batch norm: {out.shape}")
    out = self.batch_norm(out.unsqueeze(-1)).squeeze(-1)
    print(f"Shape after batch normalization before dropout: {out.shape}")
    out = self.dropout(out)
    print(f"Shape after dropout: {out.shape}")
    return out / (attn.sum(dim=-1) + eps())

    This setup should help balance stability and generalization effectively! 😊 If you notice underfitting issues later on, consider adjusting the dropout rate slightly.
    <>: To make sure dropout behaves correctly during training vs inference phases automatically without manual toggling between modes like train/eval mode toggles seen elsewhere—how do I integrate TensorFlow/Keras mechanisms into my custom layer? Should I rely on any specific methods or attributes provided by Keras layers?
    <>: In TensorFlow/Keras custom layers like yours implemented using subclassing patterns (`Module`), dropout behavior during training versus inference is automatically managed when integrated into models built using Keras APIs (`Model`). Here’s how it works seamlessly:

    – **Automatic Mode Handling**: When you use Keras’ functional API (`Model(inputs=…, outputs=…)`) or Sequential API (`Sequential([…])`), Keras handles switching between training/inference modes internally based on whether you call `.fit()`, `.evaluate()`, or `.predict()` methods.

    – **No Need for Manual Toggling**: You don’t need manual toggling between modes as long as your layer uses standard Keras components like `Dropout`. They automatically adapt based on context provided by these high-level APIs.

Ensure your layer inherits from `tf.keras.layers.Layer` rather than a custom base class, unless your design specifically requires otherwise:

    python
import tensorflow as tf

class AttentionConv(tf.keras.layers.Layer):
    """Attention mechanism via convolution."""

    def __init__(self, n_in: int, n_out: int, p_dropout=0.5) -> None:
        super().__init__()
        self.conv = tf.keras.layers.Conv1D(...)
        self.gate = tf.keras.layers.Conv1D(..., use_bias=False)
        self.batch_norm = tf.keras.layers.BatchNormalization()
        self.dropout = tf.keras.layers.Dropout(p_dropout)

    def call(self, x):
        attn = tf.sigmoid(self.gate(x))
        out = tf.reduce_sum(self.conv(x) * attn, axis=-1)
        out = tf.squeeze(self.batch_norm(out[..., tf.newaxis]), axis=-1)
        out = self.dropout(out)
        return out / (tf.reduce_sum(attn, axis=-1) + tf.keras.backend.epsilon())

# Now integrate into models using functional/sequential APIs.

    By following these guidelines and leveraging Keras’ built-in mechanisms through its high-level APIs, dropout will function correctly across different phases without additional effort from your side! 😊 If further customization is needed beyond standard practices mentioned here—like explicit control over training/inference states—consider diving deeper into custom logic within TensorFlow’s graph execution context.
<>: Got it! So if I switch my base class from Module to tf.keras.layers.Layer as suggested, do I still need any specific changes elsewhere in my code? Also, should I replace my convolutions with tf.keras.layers.Conv1D directly? Just want to make sure everything aligns properly with Keras conventions.
<>: Switching your base class from Module to tf.keras.layers.Layer is a good move for seamless integration with Keras conventions. Here's what you should consider doing next:

    – **Base Class Change**: Ensure all methods are compatible with Keras conventions—use `super().__init__()` instead of custom initializations if necessary.

– **Convolution Layers**: Replace any custom convolution layers with the appropriate Keras equivalents, such as `tf.keras.layers.Conv1D`. If you need something more specialised, such as `tf.keras.layers.ConvLSTM2D`, make sure it fits your architecture.

    Here’s an updated version reflecting these changes:

    python
import tensorflow as tf

class AttentionConv(tf.keras.layers.Layer):
    """Attention mechanism via convolution."""

    def __init__(self, n_in: int, n_out: int, p_dropout=0.5) -> None:
        super().__init__()

        # Use Conv1D layers from Keras directly.
        self.conv = tf.keras.layers.Conv1D(filters=n_out,
                                           kernel_size=(...),
                                           activation=None)

        self.gate = tf.keras.layers.Conv1D(filters=n_out,
                                           kernel_size=(...),
                                           activation=None,
                                           use_bias=False)

        self.batch_norm = tf.keras.layers.BatchNormalization()

        self.dropout = tf.keras.layers.Dropout(p_dropout)

    def call(self, x):
        attn = tf.sigmoid(self.gate(x))

        out = tf.reduce_sum(self.conv(x) * attn, axis=-1)

        out = tf.squeeze(self.batch_norm(out[..., tf.newaxis]), axis=-1)

        out = self.dropout(out)

        return out / (tf.reduce_sum(attn, axis=-1) + tf.keras.backend.epsilon())

# Now integrate into models using functional/sequential APIs.
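#
# For example, a minimal usage sketch (left as comments because the kernel_size
# placeholders above still need concrete values; the input shape and layer sizes
# below are illustrative assumptions, not part of the original discussion):
#
#   inputs = tf.keras.Input(shape=(128, 16))             # (timesteps, features)
#   attended = AttentionConv(n_in=16, n_out=32)(inputs)
#   outputs = tf.keras.layers.Dense(1)(attended)
#   model = tf.keras.Model(inputs=inputs, outputs=outputs)
#   model.compile(optimizer="adam", loss="mse")
#   # model.fit(...) runs the layer with training=True (dropout active), while
#   # model.predict(...) runs it with training=False (dropout disabled).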

    Make sure all parameters align with what’s needed for your specific architecture requirements when defining convolutional layers (`filters`, `kernel_size`, etc.). This setup will ensure compatibility and leverage all benefits provided by Keras’ automatic handling features! 😊 If there are additional specifics about layer configurations needed—adjust accordingly based on architecture goals!
    <>: Perfect! Switching to tf.keras.layers.Layer makes sense now. I’ll update my convolutions accordingly too. Thanks for clarifying everything—I think I’m all set to implement these changes!
<>: You're welcome! It sounds like you're ready to go ahead with implementing those changes now. If any new questions come up as you proceed, or you need further assistance down the line, feel free to reach out anytime! Happy coding and best wishes on enhancing your model's performance! 🚀😊

<>: Hi there,

I've been trying to implement hierarchical state machine transitions in Python, where states inherit behaviours from their parent states unless they are explicitly overridden at lower levels, using YAKS library constructs similar to the XState JSON schema definitions we used earlier in JavaScript projects.

Here's what my current implementation looks like, but something isn't working right when transitioning between states that inherit behaviours from a parent state.

I have two states, B and C, defined under parent state A; they inherit the behaviours defined on A unless those are explicitly overridden at the B or C level.
In state B only the behaviour initB is defined (initA and initC are not); similarly only endB exists (endA and endC do not).
While transitioning A -> B -> C -> D -> E -> F, whenever enterBehaviors gets invoked, the behaviours of the corresponding states are invoked:
first initA is invoked, then initB, and so on until E; when the transition towards F takes place, both initE and initF are invoked.
But when transitioning back towards D, both initD and initE get invoked, whereas only initD should have been invoked. Similarly, towards C both initC and initB get invoked, whereas only initC should have been invoked.
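
To make the behaviour I expect concrete, here is a minimal, self-contained sketch (it does not use YAKS at all, and every class and behaviour name in it is purely illustrative): only the states that are newly entered should fire their enter behaviours, and a behaviour defined on a parent applies to a child unless the child overrides it.

python
class State:
    def __init__(self, name, parent=None, on_enter=None):
        self.name = name
        self.parent = parent
        self._on_enter = on_enter  # None means "inherit from the parent chain"

    def ancestors(self):
        node, chain = self, []
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain  # [self, parent, grandparent, ...]

    def resolved_on_enter(self):
        # Walk up the chain until a state that actually defines the behaviour is found.
        for node in self.ancestors():
            if node._on_enter is not None:
                return node._on_enter
        return None

def enter_behaviors(source, target):
    """Enter behaviours to fire when transitioning from source to target."""
    already_active = set(source.ancestors())
    newly_entered = [s for s in target.ancestors() if s not in already_active]
    # Fire from the outermost newly entered state down to the target itself.
    behaviors = [s.resolved_on_enter() for s in reversed(newly_entered)]
    return [b for b in behaviors if b is not None]

# Illustrative check with placeholder states:
A = State("A", on_enter="initA")
B = State("B", parent=A, on_enter="initB")   # overrides A's enter behaviour
C = State("C", parent=A)                     # inherits initA from A
D = State("D", on_enter="initD")
E = State("E", on_enter="initE")

print(enter_behaviors(A, B))   # ['initB']  (only B fires, with its own override)
print(enter_behaviors(A, C))   # ['initA']  (C inherits A's behaviour)
print(enter_behaviors(E, D))   # ['initD']  (only initD, never initE, on the way back)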

    Here’s my buggy code…

    python

from yaks.StateMachine import StateMachine

smAeCBDFAutomataSMInstance = _state_machine.initializer(initState="A")
smAeCBDFAutomataSMInstance.add_state("A")
smAeCBDFAutomataSMInstance.add_state("B", {"parent": "A"})
smAeCBDFAutomataSMInstance.add_state("C", {"parent": "A"})
smAeCBDFAutomataSMInstance.add_state("D")
smAeCBDFAutomataSMInstance.add_state("E")
smAeCBDFAutomataSMInstance.add_state("F")

with smAeCBDFAutomataSMInstance.state("A") as sm_A_exited:
    with sm_A_exited.parent().state("B") as sm_B_exited:
        with sm_B_exited.parent().state("C") as sm_C_exited:

            smAeCBDFAutomataSMInstance.add_transition(sm_A_exited, 'east', sm_B_exited)

            smAeCBDFAutomataSMInstance.add_transition(sm_B_exited, 'east', sm_C_exited)

            smAeCBDFAutomataSMInstance.add_transition(sm_C_exited, 'east', lambda: sm_D_instance)

sm_D_instance = _state_machine.initializer(initState="D")

with sm_D_instance.state("D") as sm_D_exited:
    with sm_D_instance.parent().state("E") as sm_E_exited:
        with sm_E_exited.parent().state("F") as sm_F_exited:

            smAeCBDFAutomataSMInstance.add_transition(sm_D_instance, 'east', lambda: sm_E_instance)

            smEAutomatonStateMachineExhaustive._add_transition(sm_E_instance, 'east', lambda: sm_F_instance)

# Adding exit behaviours under State D under parent State Machine E Automaton State Machine Exhaustive
sm_D_instance.set_on_exit(['onExitD'])

# Adding entry behaviours under State E under parent State Machine E Automaton State Machine Exhaustive
sm_E_instance.set_on_enter(['onEnterE'])

# Adding entry behaviours under State F under parent State Machine E Automaton State Machine Exhaustive
sm_F_instance.set_on_enter(['onEnterF'])

# Adding exit behaviours under State F under parent State Machine E Automaton State Machine Exhaustive
sm_F_instance.set_on_exit(['onExitF'])

    if __name__ == “__main__”:

    print “*** Test exhaustive automation state machine >> Author : Pradeep Jaisinghani <<"
    print "nProcessing…"
    print "*** Test Automation state machine which covers exhaustive combination hits ***"
    print "nInitializing state machines…"

    try :

    print "*** Transitioning through states A through F once ***"

    print "*** Init A %s *** "%time.strftime("%c")

    _sm_currentState=A;
    _sm_currentStateEx=B;
    _sm_currentStateExParent=C;
    _sm_currentStateParent=D;
    _sm_currentStateParentParent=E;
    _sm_currentStateParentParentParent=F;

    _sm_currentStateExObj=A;

    _sm_currentStateExObj=B;

    _sm_currentStateExParentObj=C;

    _sm_currentStateParentObj=D;

    _sm_currentStateParentParentObj=E;

    _sm_currentStateParentParentParentObj=F;

    # Transition through east direction starting at state A until reaching F

    while _sm_currentState != F :

    if _sm_currentState == A :

    print "*** Transitioning through East Direction At State %s *** "%_sm_currentState ;

    _smeast=_currentStateExObj.east();
    if _smeast != None :

    _smeast.execute();
    _smeastobj=_smeast.obj();

    if _smeastobj != None :

    print "*** OnEntry %s Hit *** "%_smeastobj._getOnEntry();

    else :

    print "*** OnEntry Hit Is NONE *** ";

    else :

    pass ;

    elif _sm_currentState == B :

    print "*** Transitioning through East Direction At State %s *** "%_sm_currentState ;

    _smeast=_currentStateExObj.east();
    if _smeast != None :

    _smeast.execute();
    _smeastobj=_smeast.obj();

    if _smeastobj != None :

    print "*** OnEntry %s Hit *** "%_smeastobj._getOnEntry();

    else :

    print "*** OnEntry Hit Is NONE *** ";

    else :

    pass ;

    elif _sm_currentState == C :

    print "*** Transitioning through East Direction At State %s *** "%_sm_currentState ;

    _smeast=_currentStateExObj.east();
    if _smeast != None :

    _smeast.execute();
    _smeastobj=_smeast.obj();

    if _smeastobj != None :

    print "*** OnEntry %s Hit *** "%_smeastobj._getOnEntry();

    else :

    print "*** OnEntry Hit Is NONE *** ";

    else :

    pass ;

    elif _sm_currentState == D :

    print "*** Transitioning through East Direction At State %s *** "%_sm_currentState ;

    _smeast=_currentStateExObj.east();
    if _smeast != None :

    _smeast.execute();
    _smeastobj=_smeast.obj();

    if _smeastobj != None :

    print "*** OnEntry %s Hit *** "%_smeastobj._getOnEntry();

    else :

    print "*** OnEntry Hit Is NONE *** ";

    else :

    pass ;

    elif _sm_currentState == E :

    print "*** Transitioning through East Direction At State % s*** "_%_current_State;

    _smaeast=current_State_Ex_Obj.ea_st();

    if Smaeast!=None:

    Sm_a_e_ast.Execute();

    Sma_e_ast_Obj=Sma_e_ast.Obj();

    If Sma_e_ast_Obj!=None:

    Print "**On Entry Sma_e_ast_Obj.get_On_Entry() Hit***";

    Else:

    Print "**On Entry Hit Is NONE**";

    Else:

    Pass;

    elif Sm_Current_State==F:

    Print "**Transition Through East Direction At Stae Sm_Current_State**";

    Sm_E_A_S_T=S_m_Current_State_Ex_Obj.EAST();

    If Sm_e_a_st!=None:

    Sm_e_a_st.EXECUTE();

    Sm_e_a_st_obj=s_m_e_a_st.obj();

    If Sm_e_a_st_obj!=None:

    Print "**ON Exit Sm_e_a_st_obj.get_On_exit() hit**";

    Else:

    Print "**ON Exit IS NONE**";

    Else:

    Pass;

    else:

    Pass;

    Current_State_Sm_A_Ex_Obj=current_State.SMA_EX_OBJ.EAST();

    Current_State=current_state.SMA_EX_OBJ.OBJ();

    While Current_State_Sm_A_Ex_ObJ!=None:

    Current_State_Sm_A_Ex_ObJ.Execute();

    Current_State=current_state.sm_a_exc_obj.obj();

    If Current_State!=None:

    Print "**On Entry Current_state.get_On_Entry() hit**";

    Else:

    Print "**On Entry IS NONE**";

    End_if;

    End_while;

    End_if;

    End_while;

    Except Exception As e:

    Print "EXCEPTION CAUGHT";

    Print str(e);

    End_try;

    End_main_method;

    except Exception,e :

    traceback.print_exc()

    finally :

    del(smAeCBDFAutomataSMInstance);
    del(_state_machine);

    del(smA);
    del(s_mB);
    del(s_mC);
    del(s_mD);
    del(s_mE);
    del(s_mF);

    del(smAE_CBDFAutomatonStateMachineExhaustive);

    __author__ ="Pradeep Jaisinghani";
    __copyright__ ="Copyright © 2016 projectcamelot.org Pradeep Jaisinghani";
    __credits__ =[“Pradeep Jaisinghani”];
    __license__ ="MIT";
    __version__ ="v01";
    __maintainer__ ="Pradeep Jaisinghani";
    __email__ ="[email protected]";
    __status__ ="Development";

    '''

    And here's the traceback error message…

Traceback (most recent call last):

  File "", line XXXX, in main_method_name_here_above_defined_here_at_line_XYZ

ExceptionType: "NameError"

ExceptionMessage: "name '_state_machine' is not defined"

During handling of the above exception, the following exception occurred:

Traceback (most recent call last):

  File "", line XXXX, in main_method_name_here_above_defined_here_at_line_XYZ

ExceptionType: "NameError"

ExceptionMessage: "name 'traceback' is not defined"

    Any ideas why this inheritance isn’t working correctly?

    Thanks!