API Reference

Here you can find the API reference for the kernpy package: a detailed description of the kernpy API, how its methods work, and which parameters they accept.

Find the basics of the kernpy package in the Tutorial.

kernpy


Python package with utilities for Humdrum **kern and **mens encodings.

Execute the following commands to run kernpy as a module:

python -m kernpy --help
python -m kernpy <command> <options>

Intervals = {-2: 'dd1', -1: 'd1', 0: 'P1', 1: 'A1', 2: 'AA1', 3: 'dd2', 4: 'd2', 5: 'm2', 6: 'M2', 7: 'A2', 8: 'AA2', 9: 'dd3', 10: 'd3', 11: 'm3', 12: 'M3', 13: 'A3', 14: 'AA3', 15: 'dd4', 16: 'd4', 17: 'P4', 18: 'A4', 19: 'AA4', 21: 'dd5', 22: 'd5', 23: 'P5', 24: 'A5', 25: 'AA5', 26: 'dd6', 27: 'd6', 28: 'm6', 29: 'M6', 30: 'A6', 31: 'AA6', 32: 'dd7', 33: 'd7', 34: 'm7', 35: 'M7', 36: 'A7', 37: 'AA7', 40: 'octave'} module-attribute

Base-40 interval classes (d=diminished, m=minor, M=major, P=perfect, A=augmented)
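For illustration, the mapping can be indexed by a base-40 chroma distance to name an interval. A minimal sketch (the trimmed INTERVALS table below copies a few values from the table above; the package itself exposes the full mapping as the Intervals module attribute):

```python
# Trimmed copy of the base-40 Intervals table shown above; kernpy exposes
# the full mapping as the module attribute Intervals.
INTERVALS = {0: 'P1', 5: 'm2', 6: 'M2', 11: 'm3', 12: 'M3', 17: 'P4', 23: 'P5'}

def interval_name(chroma_a: int, chroma_b: int) -> str:
    """Name the base-40 interval between two chroma values (same octave)."""
    return INTERVALS[abs(chroma_b - chroma_a)]
```

A perfect fifth, for example, spans 23 base-40 steps.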

AbstractToken

Bases: ABC

An abstract base class representing a token.

This class serves as a blueprint for creating various types of tokens, which are categorized based on their TokenCategory.

Attributes:

Name Type Description
encoding str

The original representation of the token.

category TokenCategory

The category of the token.

hidden bool

A flag indicating whether the token is hidden. Defaults to False.

Source code in kernpy/core/tokens.py
class AbstractToken(ABC):
    """
    An abstract base class representing a token.

    This class serves as a blueprint for creating various types of tokens, which are
    categorized based on their TokenCategory.

    Attributes:
        encoding (str): The original representation of the token.
        category (TokenCategory): The category of the token.
        hidden (bool): A flag indicating whether the token is hidden. Defaults to False.
    """

    def __init__(self, encoding: str, category: TokenCategory):
        """
        AbstractToken constructor

        Args:
            encoding (str): The original representation of the token.
            category (TokenCategory): The category of the token.
        """
        self.encoding = encoding
        self.category = category
        self.hidden = False

    @abstractmethod
    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns:
            str: The encoded token representation, potentially filtered if a filter_categories function is provided.

        Examples:
            >>> token = AbstractToken('*clefF4', TokenCategory.SIGNATURES)
            >>> token.export()
            '*clefF4'
            >>> token.export(filter_categories=lambda cat: cat in {TokenCategory.SIGNATURES, TokenCategory.SIGNATURES.DURATION})
            '*clefF4'
        """
        pass


    def __str__(self):
        """
        Returns the string representation of the token.

        Returns (str): The string representation of the token without processing.
        """
        return self.export()

    def __eq__(self, other):
        """
        Compare two tokens.

        Args:
            other (AbstractToken): The other token to compare.
        Returns (bool): True if the tokens are equal, False otherwise.
        """
        if not isinstance(other, AbstractToken):
            return False
        return self.encoding == other.encoding and self.category == other.category

    def __ne__(self, other):
        """
        Compare two tokens.

        Args:
            other (AbstractToken): The other token to compare.
        Returns (bool): True if the tokens are different, False otherwise.
        """
        return not self.__eq__(other)

    def __hash__(self):
        """
        Returns the hash of the token.

        Returns (int): The hash of the token.
        """
        return hash((self.export(), self.category))
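As a sketch of how this contract is meant to be used, the condensed example below implements export in a concrete subclass. The TokenCategory stub and the PlainToken class are hypothetical stand-ins for illustration, not part of kernpy:

```python
from abc import ABC, abstractmethod
from enum import Enum, auto

class TokenCategory(Enum):        # stub standing in for kernpy's enum
    SIGNATURES = auto()
    OTHER = auto()

class AbstractToken(ABC):         # condensed from the listing above
    def __init__(self, encoding: str, category: TokenCategory):
        self.encoding = encoding
        self.category = category
        self.hidden = False

    @abstractmethod
    def export(self, **kwargs) -> str: ...

    def __eq__(self, other):
        return (isinstance(other, AbstractToken)
                and self.encoding == other.encoding
                and self.category == other.category)

    def __hash__(self):
        return hash((self.export(), self.category))

class PlainToken(AbstractToken):  # hypothetical minimal subclass
    def export(self, **kwargs) -> str:
        return self.encoding

token = PlainToken('*clefF4', TokenCategory.SIGNATURES)
```

Equal encoding and category make two tokens compare equal and hash identically, which is what lets tokens be stored in sets and dicts.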

__eq__(other)

Compare two tokens.

Parameters:

Name Type Description Default
other AbstractToken

The other token to compare.

required

Returns (bool): True if the tokens are equal, False otherwise.

Source code in kernpy/core/tokens.py
def __eq__(self, other):
    """
    Compare two tokens.

    Args:
        other (AbstractToken): The other token to compare.
    Returns (bool): True if the tokens are equal, False otherwise.
    """
    if not isinstance(other, AbstractToken):
        return False
    return self.encoding == other.encoding and self.category == other.category

__hash__()

Returns the hash of the token.

Returns (int): The hash of the token.

Source code in kernpy/core/tokens.py
def __hash__(self):
    """
    Returns the hash of the token.

    Returns (int): The hash of the token.
    """
    return hash((self.export(), self.category))

__init__(encoding, category)

AbstractToken constructor

Parameters:

Name Type Description Default
encoding str

The original representation of the token.

required
category TokenCategory

The category of the token.

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str, category: TokenCategory):
    """
    AbstractToken constructor

    Args:
        encoding (str): The original representation of the token.
        category (TokenCategory): The category of the token.
    """
    self.encoding = encoding
    self.category = category
    self.hidden = False

__ne__(other)

Compare two tokens.

Parameters:

Name Type Description Default
other AbstractToken

The other token to compare.

required

Returns (bool): True if the tokens are different, False otherwise.

Source code in kernpy/core/tokens.py
def __ne__(self, other):
    """
    Compare two tokens.

    Args:
        other (AbstractToken): The other token to compare.
    Returns (bool): True if the tokens are different, False otherwise.
    """
    return not self.__eq__(other)

__str__()

Returns the string representation of the token.

Returns (str): The string representation of the token without processing.

Source code in kernpy/core/tokens.py
def __str__(self):
    """
    Returns the string representation of the token.

    Returns (str): The string representation of the token without processing.
    """
    return self.export()

export(**kwargs) abstractmethod

Exports the token.

Other Parameters:

Name Type Description
filter_categories Optional[Callable[[TokenCategory], bool]]

A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None. If None, all tokens will be exported.

Returns:

Name Type Description
str str

The encoded token representation, potentially filtered if a filter_categories function is provided.

Examples:

>>> token = AbstractToken('*clefF4', TokenCategory.SIGNATURES)
>>> token.export()
'*clefF4'
>>> token.export(filter_categories=lambda cat: cat in {TokenCategory.SIGNATURES, TokenCategory.SIGNATURES.DURATION})
'*clefF4'
Source code in kernpy/core/tokens.py
@abstractmethod
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns:
        str: The encoded token representation, potentially filtered if a filter_categories function is provided.

    Examples:
        >>> token = AbstractToken('*clefF4', TokenCategory.SIGNATURES)
        >>> token.export()
        '*clefF4'
        >>> token.export(filter_categories=lambda cat: cat in {TokenCategory.SIGNATURES, TokenCategory.SIGNATURES.DURATION})
        '*clefF4'
    """
    pass

AgnosticPitch

Represents a pitch in a generic way, independent of the notation system used.

Source code in kernpy/core/pitch_models.py
class AgnosticPitch:
    """
    Represents a pitch in a generic way, independent of the notation system used.
    """

    ASCENDANT_ACCIDENTAL_ALTERATION = '+'
    DESCENDENT_ACCIDENTAL_ALTERATION = '-'
    ACCIDENTAL_ALTERATIONS = {
        ASCENDANT_ACCIDENTAL_ALTERATION,
        DESCENDENT_ACCIDENTAL_ALTERATION
    }


    def __init__(self, name: str, octave: int):
        """
        Initialize the AgnosticPitch object.

        Args:
            name (str): The name of the pitch (e.g., 'C', 'D#', 'Bb').
            octave (int): The octave of the pitch (e.g., 4 for middle C).
        """
        self.name = name
        self.octave = octave

    @property
    def name(self):
        return self.__name

    @name.setter
    def name(self, name):
        # Normalize '#'/'b' accidentals to '+'/'-' before upper-casing,
        # so flats written as 'b' are converted rather than swallowed by .upper()
        letter, alterations = name[0].upper(), name[1:]
        alterations = alterations.replace('#', '+').replace('b', '-')
        name = letter + alterations
        accidentals = ''.join([c for c in name if c in ['-', '+']])

        check_name = name.replace('+', '').replace('-', '')
        if check_name not in pitches:
            raise ValueError(f"Invalid pitch: {name}")
        if len(accidentals) > 3:
            raise ValueError(f"Invalid pitch: {name}. Maximum of 3 accidentals allowed.")
        self.__name = name

    @property
    def octave(self):
        return self.__octave

    @octave.setter
    def octave(self, octave):
        if not isinstance(octave, int):
            raise ValueError(f"Invalid octave: {octave}")
        self.__octave = octave

    def get_chroma(self):
        return 40 * self.octave + Chromas[self.name]

    @classmethod
    def to_transposed(cls, agnostic_pitch: 'AgnosticPitch', raw_interval, direction: str = Direction.UP.value) -> 'AgnosticPitch':
        delta = raw_interval if direction == Direction.UP.value else - raw_interval
        chroma = agnostic_pitch.get_chroma() + delta
        name = ChromasByValue[chroma % 40]
        octave = chroma // 40
        return AgnosticPitch(name, octave)

    @classmethod
    def get_chroma_from_interval(cls, pitch_a: 'AgnosticPitch', pitch_b: 'AgnosticPitch'):
        return pitch_b.get_chroma() - pitch_a.get_chroma()

    def __str__(self):
        return f"<{self.name}, {self.octave}>"

    def __repr__(self):
        return f"{self.__class__.__name__}(name={self.name}, octave={self.octave})"

    def __eq__(self, other):
        if not isinstance(other, AgnosticPitch):
            return False
        return self.name == other.name and self.octave == other.octave

    def __ne__(self, other):
        if not isinstance(other, AgnosticPitch):
            return True
        return self.name != other.name or self.octave != other.octave

    def __hash__(self):
        return hash((self.name, self.octave))

    def __lt__(self, other):
        if not isinstance(other, AgnosticPitch):
            return NotImplemented
        if self.octave == other.octave:
            return Chromas[self.name] < Chromas[other.name]
        return self.octave < other.octave

    def __gt__(self, other):
        if not isinstance(other, AgnosticPitch):
            return NotImplemented
        if self.octave == other.octave:
            return Chromas[self.name] > Chromas[other.name]
        return self.octave > other.octave
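The chroma arithmetic behind get_chroma and to_transposed can be sketched standalone. The CHROMAS table below is an assumed base-40 layout for the seven naturals (consistent with the Intervals table above), not necessarily the exact values kernpy uses internally:

```python
# Assumed base-40 chroma values for the naturals; kernpy's actual Chromas
# mapping in pitch_models.py may use different absolute positions.
CHROMAS = {'C': 2, 'D': 8, 'E': 14, 'F': 19, 'G': 25, 'A': 31, 'B': 37}
CHROMAS_BY_VALUE = {v: k for k, v in CHROMAS.items()}

def get_chroma(name: str, octave: int) -> int:
    """Absolute base-40 chroma: 40 per octave plus the pitch-class offset."""
    return 40 * octave + CHROMAS[name]

def transposed(name: str, octave: int, raw_interval: int) -> tuple:
    """Same arithmetic as AgnosticPitch.to_transposed: add the interval,
    then split the result back into pitch class and octave."""
    chroma = get_chroma(name, octave) + raw_interval
    return CHROMAS_BY_VALUE[chroma % 40], chroma // 40
```

Transposing by 23 base-40 steps (a perfect fifth) takes C4 to G4, and G4 to D5, with the octave carry handled by the floor division.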

__init__(name, octave)

Initialize the AgnosticPitch object.

Parameters:

Name Type Description Default
name str

The name of the pitch (e.g., 'C', 'D#', 'Bb').

required
octave int

The octave of the pitch (e.g., 4 for middle C).

required
Source code in kernpy/core/pitch_models.py
def __init__(self, name: str, octave: int):
    """
    Initialize the AgnosticPitch object.

    Args:
        name (str): The name of the pitch (e.g., 'C', 'D#', 'Bb').
        octave (int): The octave of the pitch (e.g., 4 for middle C).
    """
    self.name = name
    self.octave = octave

BasicSpineImporter

Bases: SpineImporter

Source code in kernpy/core/basic_spine_importer.py
class BasicSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        BasicSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()  # TODO: Create a custom functional listener for BasicSpineImporter

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception:
            return SimpleToken(encoding, TokenCategory.OTHER)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.BARLINES,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.OTHER)

        return token

__init__(verbose=False)

BasicSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/basic_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    BasicSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)
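The accept-or-demote logic of import_token can be sketched standalone; Cat and parse below are hypothetical stand-ins for TokenCategory and KernSpineImporter, used only to illustrate the control flow:

```python
from enum import Enum, auto

class Cat(Enum):                       # stub for kernpy's TokenCategory
    SIGNATURES = auto()
    BARLINES = auto()
    NOTE = auto()
    OTHER = auto()

ACCEPTED = {Cat.SIGNATURES, Cat.BARLINES}

def parse(encoding: str) -> Cat:       # hypothetical stand-in for KernSpineImporter
    if encoding.startswith('*'):
        return Cat.SIGNATURES
    if encoding.startswith('='):
        return Cat.BARLINES
    return Cat.NOTE

def import_token(encoding: str) -> tuple:
    """Mirror of the accept-or-demote pattern: tokens that fail to parse,
    or whose category is not accepted, fall back to (encoding, OTHER)."""
    try:
        category = parse(encoding)
    except Exception:
        return encoding, Cat.OTHER
    if category not in ACCEPTED:
        return encoding, Cat.OTHER
    return encoding, category
```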

BekernTokenizer

Bases: Tokenizer

BekernTokenizer converts a Token into a bekern (Basic Extended **kern) string representation. This format uses a '@' separator between the main tokens but discards all decoration tokens.

Source code in kernpy/core/tokenizers.py
class BekernTokenizer(Tokenizer):
    """
    BekernTokenizer converts a Token into a bekern (Basic Extended **kern) string representation. This format uses a '@' separator for the \
    main tokens but discards all decoration tokens.
    """

    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new BekernTokenizer

        Args:
            token_categories (Set[TokenCategory]): List of categories to be tokenized. If None will raise an exception.
        """
        super().__init__(token_categories=token_categories)

    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a bekern string representation.
        Args:
            token (Token): Token to be tokenized.

        Returns (str): bekern string representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> BekernTokenizer().tokenize(token)
            '2@.@bb@-'
        """
        ekern_content = token.export(filter_categories=lambda cat: cat in self.token_categories)

        if DECORATION_SEPARATOR not in ekern_content:
            return ekern_content

        reduced_content = ekern_content.split(DECORATION_SEPARATOR)[0]
        if reduced_content.endswith(TOKEN_SEPARATOR):
            reduced_content = reduced_content[:-1]

        return reduced_content

__init__(*, token_categories)

Create a new BekernTokenizer

Parameters:

Name Type Description Default
token_categories Set[TokenCategory]

List of categories to be tokenized. If None will raise an exception.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new BekernTokenizer

    Args:
        token_categories (Set[TokenCategory]): List of categories to be tokenized. If None will raise an exception.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into a bekern string representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): bekern string representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> BekernTokenizer().tokenize(token)
'2@.@bb@-'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a bekern string representation.
    Args:
        token (Token): Token to be tokenized.

    Returns (str): bekern string representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> BekernTokenizer().tokenize(token)
        '2@.@bb@-'
    """
    ekern_content = token.export(filter_categories=lambda cat: cat in self.token_categories)

    if DECORATION_SEPARATOR not in ekern_content:
        return ekern_content

    reduced_content = ekern_content.split(DECORATION_SEPARATOR)[0]
    if reduced_content.endswith(TOKEN_SEPARATOR):
        reduced_content = reduced_content[:-1]

    return reduced_content
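A standalone sketch of the reduction step. The separator values are inferred from the example above ('@' joins the main subtokens, '·' starts the decoration tail) and may differ from kernpy's actual constants:

```python
# Inferred from the tokenize example: '@' joins main subtokens and '·'
# introduces the decoration tail; kernpy defines the real constants.
TOKEN_SEPARATOR = '@'
DECORATION_SEPARATOR = '·'

def bekern(ekern_content: str) -> str:
    """Drop everything from the first decoration separator onwards,
    trimming a trailing '@' if the cut leaves one behind."""
    if DECORATION_SEPARATOR not in ekern_content:
        return ekern_content
    reduced = ekern_content.split(DECORATION_SEPARATOR)[0]
    return reduced[:-1] if reduced.endswith(TOKEN_SEPARATOR) else reduced
```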

BkernTokenizer

Bases: Tokenizer

BkernTokenizer converts a Token into a bkern (Basic **kern) string representation. This format uses the main tokens but omits the decoration tokens. It is a lightweight version of the classic Humdrum **kern format.

Source code in kernpy/core/tokenizers.py
class BkernTokenizer(Tokenizer):
    """
    BkernTokenizer converts a Token into a bkern (Basic **kern) string representation. This format uses \
    the main tokens but not the decoration tokens. This format is a lightweight version of the classic
    Humdrum **kern format.
    """

    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new BkernTokenizer

        Args:
            token_categories (Set[TokenCategory]): List of categories to be tokenized. If None will raise an exception.
        """
        super().__init__(token_categories=token_categories)


    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a bkern string representation.
        Args:
            token (Token): Token to be tokenized.

        Returns (str): bkern string representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> BkernTokenizer().tokenize(token)
            '2.bb-'
        """
        return BekernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '')

__init__(*, token_categories)

Create a new BkernTokenizer

Parameters:

Name Type Description Default
token_categories Set[TokenCategory]

List of categories to be tokenized. If None will raise an exception.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new BkernTokenizer

    Args:
        token_categories (Set[TokenCategory]): List of categories to be tokenized. If None will raise an exception.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into a bkern string representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): bkern string representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> BkernTokenizer().tokenize(token)
'2.bb-'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a bkern string representation.
    Args:
        token (Token): Token to be tokenized.

    Returns (str): bkern string representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> BkernTokenizer().tokenize(token)
        '2.bb-'
    """
    return BekernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '')
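Following the docstring example, and assuming '@' is the main-token separator, the bkern reduction is simply the bekern output with the separator removed:

```python
def bkern(bekern_content: str) -> str:
    """bkern is bekern with the '@' main-token separator stripped out
    (assumes '@' is the separator, as in the example above)."""
    return bekern_content.replace('@', '')
```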

C1Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class C1Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('C'), 1)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('C', 3)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('C'), 1)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('C', 3)

C2Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class C2Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('A'), 2)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('A', 2)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('A'), 2)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('A', 2)

C3Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class C3Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('C'), 3)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('B', 2)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('C'), 3)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('B', 2)

C4Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class C4Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('C'), 4)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('D', 2)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('C'), 4)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('D', 2)

Clef

Bases: ABC

Abstract class representing a clef.

Source code in kernpy/core/gkern.py
class Clef(ABC):
    """
    Abstract class representing a clef.
    """

    def __init__(self, diatonic_pitch: DiatonicPitch, on_line: int):
        """
        Initializes the Clef object.
        Args:
            diatonic_pitch (DiatonicPitch): The diatonic pitch of the clef (e.g., 'C', 'G', 'F'). This value is used as a decorator.
            on_line (int): The line number on which the clef is placed (1 for bottom line, 2 for 1st line from bottom, etc.). This value is used as a decorator.
        """
        self.diatonic_pitch = diatonic_pitch
        self.on_line = on_line

    @abstractmethod
    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        ...

    def name(self):
        """
        Returns the name of the clef.
        """
        return f"{self.diatonic_pitch} on line {self.on_line}"

    def reference_point(self) -> PitchPositionReferenceSystem:
        """
        Returns the reference point for the clef.
        """
        return PitchPositionReferenceSystem(self.bottom_line())

    def __str__(self) -> str:
        """
        Returns:
            str: The string representation of the clef.
        """
        return f'{self.diatonic_pitch.encoding.upper()} on the {self.on_line}{self._ordinal_suffix(self.on_line)} line'

    @staticmethod
    def _ordinal_suffix(number: int) -> str:
        """
        Returns the ordinal suffix for a given integer (e.g. 'st', 'nd', 'rd', 'th').

        Args:
            number (int): The number to get the suffix for.

        Returns:
            str: The ordinal suffix.
        """
        # 11, 12, 13 always take “th”
        if 11 <= (number % 100) <= 13:
            return 'th'
        # otherwise use last digit
        last = number % 10
        if last == 1:
            return 'st'
        elif last == 2:
            return 'nd'
        elif last == 3:
            return 'rd'
        else:
            return 'th'

__init__(diatonic_pitch, on_line)

Initializes the Clef object.

Parameters:

Name Type Description Default
diatonic_pitch DiatonicPitch

The diatonic pitch of the clef (e.g., 'C', 'G', 'F'). This value is used as a decorator.

required
on_line int

The line number on which the clef is placed (1 for bottom line, 2 for 1st line from bottom, etc.). This value is used as a decorator.

required

Source code in kernpy/core/gkern.py
def __init__(self, diatonic_pitch: DiatonicPitch, on_line: int):
    """
    Initializes the Clef object.
    Args:
        diatonic_pitch (DiatonicPitch): The diatonic pitch of the clef (e.g., 'C', 'G', 'F'). This value is used as a decorator.
        on_line (int): The line number on which the clef is placed (1 for bottom line, 2 for 1st line from bottom, etc.). This value is used as a decorator.
    """
    self.diatonic_pitch = diatonic_pitch
    self.on_line = on_line

__str__()

Returns:

Name Type Description
str str

The string representation of the clef.

Source code in kernpy/core/gkern.py
def __str__(self) -> str:
    """
    Returns:
        str: The string representation of the clef.
    """
    return f'{self.diatonic_pitch.encoding.upper()} on the {self.on_line}{self._ordinal_suffix(self.on_line)} line'

bottom_line() abstractmethod

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
300
301
302
303
304
305
@abstractmethod
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    ...

name()

Returns the name of the clef.

Source code in kernpy/core/gkern.py
def name(self):
    """
    Returns the name of the clef.
    """
    return f"{self.diatonic_pitch} on line {self.on_line}"

reference_point()

Returns the reference point for the clef.

Source code in kernpy/core/gkern.py
def reference_point(self) -> PitchPositionReferenceSystem:
    """
    Returns the reference point for the clef.
    """
    return PitchPositionReferenceSystem(self.bottom_line())
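The suffix rule implemented by _ordinal_suffix can be exercised on its own; the standalone copy below applies the same two steps (11-13 always take 'th', otherwise the last digit decides):

```python
def ordinal_suffix(number: int) -> str:
    """Same rule as Clef._ordinal_suffix: 11, 12 and 13 take 'th';
    otherwise the last digit selects 'st', 'nd', 'rd' or 'th'."""
    if 11 <= (number % 100) <= 13:
        return 'th'
    return {1: 'st', 2: 'nd', 3: 'rd'}.get(number % 10, 'th')
```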

ClefFactory

Source code in kernpy/core/gkern.py
class ClefFactory:
    CLEF_NAMES = { 'G', 'F', 'C' }
    @classmethod
    def create_clef(cls, encoding: str) -> Clef:
        """
        Creates a Clef object based on the given token.

        Clefs are encoded in interpretation tokens that start with a single * followed by the string clef and then the shape and line position of the clef. For example, a treble clef is *clefG2, with G meaning a G-clef, and 2 meaning that the clef is centered on the second line up from the bottom of the staff. The bass clef is *clefF4 since it is an F-clef on the fourth line of the staff.
        A vocal tenor clef is represented by *clefGv2, where the v means the music should be played an octave lower than the regular clef’s sounding pitches. The v operator also works on the other clefs (but these sorts of clefs are very rare). Another rare clef is *clefG^2, the opposite of *clefGv2, where the music is written an octave lower than the actual sounding pitch for the normal form of the clef. Exotic two-octave clefs can be encoded by doubling the markers: ^^ and vv.

        Args:
            encoding (str): The encoding of the clef token.

        Returns:
            Clef: The Clef object corresponding to the encoding.
        """
        encoding = encoding.replace('*clef', '')

        # at this point the encoding is like G2, F4,... or Gv2, F^4,... or G^^2, Fvv4,... or G^^...^^2, Fvvv4,...
        name = list(filter(lambda x: x in cls.CLEF_NAMES, encoding))[0]
        line = int(list(filter(lambda x: x.isdigit(), encoding))[0])
        decorators = ''.join(filter(lambda x: x in ['^', 'v'], encoding))

        if name not in cls.CLEF_NAMES:
            raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")

        if name == 'G':
            return GClef()
        elif name == 'F':
            if line == 3:
                return F3Clef()
            elif line == 4:
                return F4Clef()
            else:
                raise ValueError(f"Invalid F clef line: {line}. Expected 3 or 4.")
        elif name == 'C':
            if line == 1:
                return C1Clef()
            elif line == 2:
                return C2Clef()
            elif line == 3:
                return C3Clef()
            elif line == 4:
                return C4Clef()
            else:
                raise ValueError(f"Invalid C clef line: {line}. Expected 1, 2, 3 or 4.")
        else:
            raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")

create_clef(encoding) classmethod

Creates a Clef object based on the given token.

Clefs are encoded in interpretation tokens that start with a single * followed by the string clef and then the shape and line position of the clef. For example, a treble clef is *clefG2, with G meaning a G-clef, and 2 meaning that the clef is centered on the second line up from the bottom of the staff. The bass clef is *clefF4 since it is an F-clef on the fourth line of the staff. A vocal tenor clef is represented by *clefGv2, where the v means the music should be played an octave lower than the regular clef’s sounding pitches. Try creating a vocal tenor clef in the above interactive example. The v operator also works on the other clefs (but these sorts of clefs are very rare). Another rare clef is *clefG^2 which is the opposite of *clefGv2, where the music is written an octave lower than actually sounding pitch for the normal form of the clef. You can also try to create exotic two-octave clefs by doubling the ^^ and vv markers.

Parameters:

Name Type Description Default
encoding str

The encoding of the clef token.

required

Returns:

Clef: The Clef object corresponding to the encoding.
Source code in kernpy/core/gkern.py
@classmethod
def create_clef(cls, encoding: str) -> Clef:
    """
    Creates a Clef object based on the given token.

    Clefs are encoded in interpretation tokens that start with a single * followed by the string clef and then the shape and line position of the clef. For example, a treble clef is *clefG2, with G meaning a G-clef, and 2 meaning that the clef is centered on the second line up from the bottom of the staff. The bass clef is *clefF4 since it is an F-clef on the fourth line of the staff.
    A vocal tenor clef is represented by *clefGv2, where the v means the music should be played an octave lower than the regular clef’s sounding pitches. Try creating a vocal tenor clef in the above interactive example. The v operator also works on the other clefs (but these sorts of clefs are very rare). Another rare clef is *clefG^2 which is the opposite of *clefGv2, where the music is written an octave lower than actually sounding pitch for the normal form of the clef. You can also try to create exotic two-octave clefs by doubling the ^^ and vv markers.

    Args:
        encoding (str): The encoding of the clef token.

    Returns:

    """
    encoding = encoding.replace('*clef', '')

    # at this point the encoding is like G2, F4,... or Gv2, F^4,... or G^^2, Fvv4,... or G^^...^^2, Fvvv4,...
    name = list(filter(lambda x: x in cls.CLEF_NAMES, encoding))[0]
    line = int(list(filter(lambda x: x.isdigit(), encoding))[0])
    decorators = ''.join(filter(lambda x: x in ['^', 'v'], encoding))

    if name not in cls.CLEF_NAMES:
        raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")

    if name == 'G':
        return GClef()
    elif name == 'F':
        if line == 3:
            return F3Clef()
        elif line == 4:
            return F4Clef()
        else:
            raise ValueError(f"Invalid F clef line: {line}. Expected 3 or 4.")
    elif name == 'C':
        if line == 1:
            return C1Clef()
        elif line == 2:
            return C2Clef()
        elif line == 3:
            return C3Clef()
        elif line == 4:
            return C4Clef()
        else:
            raise ValueError(f"Invalid C clef line: {line}. Expected 1, 2, 3 or 4.")
    else:
        raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")
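The decoding performed by create_clef can be sketched independently of the library. This is a minimal stand-alone version of the same filtering logic (parse_clef and its tuple return are illustrative names, not part of the kernpy API):

```python
CLEF_NAMES = {'G', 'F', 'C'}

def parse_clef(encoding: str) -> tuple:
    """Split a Humdrum clef token such as '*clefGv2' into (name, line, decorators)."""
    body = encoding.replace('*clef', '')
    name = next(c for c in body if c in CLEF_NAMES)       # clef shape: G, F or C
    line = int(next(c for c in body if c.isdigit()))      # staff line, 1 = bottom line
    decorators = ''.join(c for c in body if c in '^v')    # octave displacement markers
    return name, line, decorators

parse_clef('*clefG2')   # ('G', 2, '')
parse_clef('*clefGv2')  # ('G', 2, 'v')
parse_clef('*clefF4')   # ('F', 4, '')
```

As in the source above, the octave decorators are parsed but do not influence which Clef subclass is chosen.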

ComplexToken

Bases: Token, ABC

Abstract ComplexToken class. This abstract class ensures that subclasses implement the export method using the 'filter_categories' parameter to filter the subtokens.

Passing the 'filter_categories' argument through **kwargs doesn't break compatibility with the parent classes, which keeps the Liskov substitution principle intact.

Source code in kernpy/core/tokens.py
class ComplexToken(Token, ABC):
    """
    Abstract ComplexToken class. This abstract class ensures that the subclasses implement the export method using\
     the 'filter_categories' parameter to filter the subtokens.

     Passing the argument 'filter_categories' by **kwargs doesn't break the compatibility with parent classes.

     This keeps the Liskov substitution principle intact.
    """
    def __init__(self, encoding: str, category: TokenCategory):
        """
        Constructor for the ComplexToken

        Args:
            encoding (str): The original representation of the token.
            category (TokenCategory) : The category of the token.
        """
        super().__init__(encoding, category)

    @abstractmethod
    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns (str): The exported token.
        """
        pass

__init__(encoding, category)

Constructor for the ComplexToken

Parameters:

Name Type Description Default
encoding str

The original representation of the token.

required
category TokenCategory)

The category of the token.

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str, category: TokenCategory):
    """
    Constructor for the ComplexToken

    Args:
        encoding (str): The original representation of the token.
        category (TokenCategory) : The category of the token.
    """
    super().__init__(encoding, category)

export(**kwargs) abstractmethod

Exports the token.

Other Parameters:

Name Type Description
filter_categories Optional[Callable[[TokenCategory], bool]]

A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None. If None, all tokens will be exported.

Returns (str): The exported token.

Source code in kernpy/core/tokens.py
@abstractmethod
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns (str): The exported token.
    """
    pass
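The substitutability point above can be sketched with stand-in classes: a caller written against the base class passes no keyword arguments and keeps working when handed a subclass that understands filter_categories (Token, NoteToken, and render here are illustrative, not the kernpy classes):

```python
class Token:
    def __init__(self, encoding):
        self.encoding = encoding

    def export(self, **kwargs):
        return self.encoding

class NoteToken(Token):
    def export(self, **kwargs):
        # filter_categories is optional: absent kwargs mean "export everything"
        filter_fn = kwargs.get('filter_categories')
        if filter_fn is not None and not filter_fn('note'):
            return ''
        return self.encoding

def render(token):           # written against Token: passes no kwargs
    return token.export()

render(NoteToken('4c'))                                          # '4c'
NoteToken('4c').export(filter_categories=lambda c: c != 'note')  # ''
```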

CompoundToken

Bases: ComplexToken

Source code in kernpy/core/tokens.py
class CompoundToken(ComplexToken):
    def __init__(self, encoding: str, category: TokenCategory, subtokens: List[Subtoken]):
        """
        Args:
            encoding (str): The complete unprocessed encoding
            category (TokenCategory): The token category, one of 'TokenCategory'
            subtokens (List[Subtoken]): The individual elements of the token. Also of type 'TokenCategory' but \
                in the hierarchy they must be children of the current token.
        """
        super().__init__(encoding, category)

        for subtoken in subtokens:
            if not isinstance(subtoken, Subtoken):
                raise ValueError(f'All subtokens must be instances of Subtoken. Found {type(subtoken)}')

        self.subtokens = subtokens

    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns (str): The exported token.
        """
        filter_categories_fn = kwargs.get('filter_categories', None)
        parts = []
        for subtoken in self.subtokens:
            # Only export the subtoken if it passes the filter_categories (if provided)
            if filter_categories_fn is None or filter_categories_fn(subtoken.category):
                # parts.append(subtoken.export(**kwargs)) in the future when SubTokens will be Tokens
                parts.append(subtoken.encoding)
        return TOKEN_SEPARATOR.join(parts) if len(parts) > 0 else EMPTY_TOKEN

__init__(encoding, category, subtokens)

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
category TokenCategory

The token category, one of 'TokenCategory'

required
subtokens List[Subtoken]

The individual elements of the token. Also of type 'TokenCategory' but in the hierarchy they must be children of the current token.

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str, category: TokenCategory, subtokens: List[Subtoken]):
    """
    Args:
        encoding (str): The complete unprocessed encoding
        category (TokenCategory): The token category, one of 'TokenCategory'
        subtokens (List[Subtoken]): The individual elements of the token. Also of type 'TokenCategory' but \
            in the hierarchy they must be children of the current token.
    """
    super().__init__(encoding, category)

    for subtoken in subtokens:
        if not isinstance(subtoken, Subtoken):
            raise ValueError(f'All subtokens must be instances of Subtoken. Found {type(subtoken)}')

    self.subtokens = subtokens

export(**kwargs)

Exports the token.

Other Parameters:

Name Type Description
filter_categories Optional[Callable[[TokenCategory], bool]]

A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None. If None, all tokens will be exported.

Returns (str): The exported token.

Source code in kernpy/core/tokens.py
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns (str): The exported token.
    """
    filter_categories_fn = kwargs.get('filter_categories', None)
    parts = []
    for subtoken in self.subtokens:
        # Only export the subtoken if it passes the filter_categories (if provided)
        if filter_categories_fn is None or filter_categories_fn(subtoken.category):
            # parts.append(subtoken.export(**kwargs)) in the future when SubTokens will be Tokens
            parts.append(subtoken.encoding)
    return TOKEN_SEPARATOR.join(parts) if len(parts) > 0 else EMPTY_TOKEN
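The export logic above can be exercised with minimal stand-in types. TokenCategory, Subtoken, TOKEN_SEPARATOR, and EMPTY_TOKEN below are simplified stand-ins for the real definitions in kernpy.core.tokens (the actual separator and empty-token values are assumptions):

```python
from dataclasses import dataclass
from enum import Enum, auto

class TokenCategory(Enum):   # simplified stand-in for kernpy's category enum
    NOTE = auto()
    LYRICS = auto()

@dataclass
class Subtoken:
    encoding: str
    category: TokenCategory

TOKEN_SEPARATOR = ' '   # assumed value; the real constant lives in kernpy.core.tokens
EMPTY_TOKEN = '*'       # assumed value as well

def export(subtokens, filter_categories=None):
    """Join the encodings of the subtokens that pass the optional category filter."""
    parts = [s.encoding for s in subtokens
             if filter_categories is None or filter_categories(s.category)]
    return TOKEN_SEPARATOR.join(parts) if parts else EMPTY_TOKEN

subs = [Subtoken('4c', TokenCategory.NOTE), Subtoken('la', TokenCategory.LYRICS)]
export(subs)                                      # '4c la'
export(subs, lambda c: c is TokenCategory.NOTE)   # '4c'
export(subs, lambda c: False)                     # '*'
```

Note that an all-rejecting filter yields the empty-token placeholder rather than an empty string, matching the EMPTY_TOKEN fallback in the source.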

Document

Document class.

This class stores the score content using an agnostic tree structure.

Attributes:

Name Type Description
tree MultistageTree

The tree structure of the document where all the nodes are stored. Each stage of the tree corresponds to a row in the Humdrum **kern file encoding.

measure_start_tree_stages List[List[Node]]

The list of nodes that correspond to the measures. Empty by default. The list is indexed starting from 1; rows are counted after removing empty lines and line comments.

page_bounding_boxes Dict[int, BoundingBoxMeasures]

The dictionary of page bounding boxes. - key: page number - value: BoundingBoxMeasures object

header_stage int

The index of the stage that contains the headers. None by default.
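The metacomment filtering that Document.get_metacomments performs (see the source below) can be sketched with plain strings; '!!!KEY: value' is the standard Humdrum metacomment convention, and this get_metacomments is a stand-alone sketch, not the bound method:

```python
def get_metacomments(comments, key=None, clear=False):
    """Filter Humdrum metacomments ('!!!KEY: value') by key, optionally stripping the prefix."""
    result = []
    for comment in comments:
        if key is None or comment.startswith(f"!!!{key}"):
            # clear removes exactly the '!!!KEY: ' prefix, as documented;
            # other formats are not handled
            result.append(comment.replace(f"!!!{key}: ", "") if clear else comment)
    return result

comments = ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
get_metacomments(comments, key='COM')              # ['!!!COM: Coltrane']
get_metacomments(comments, key='COM', clear=True)  # ['Coltrane']
get_metacomments(comments, key='missing')          # []
```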

Source code in kernpy/core/document.py
class Document:
    """
    Document class.

    This class stores the score content using an agnostic tree structure.

    Attributes:
        tree (MultistageTree): The tree structure of the document where all the nodes are stored. \
            Each stage of the tree corresponds to a row in the Humdrum **kern file encoding.
        measure_start_tree_stages (List[List[Node]]): The list of nodes that corresponds to the measures. \
            Empty list by default.
            The index of the list is starting from 1. Rows after removing empty lines and line comments
        page_bounding_boxes (Dict[int, BoundingBoxMeasures]): The dictionary of page bounding boxes. \
            - key: page number
            - value: BoundingBoxMeasures object
        header_stage (int): The index of the stage that contains the headers. None by default.
    """

    def __init__(self, tree: MultistageTree):
        """
        Constructor for Document class.

        Args:
            tree (MultistageTree): The tree structure of the document where all the nodes are stored.
        """
        self.tree = tree  # TODO: ? Should we use copy.deepcopy() here?
        self.measure_start_tree_stages = []
        self.page_bounding_boxes = {}
        self.header_stage = None

    FIRST_MEASURE = 1

    def get_header_stage(self) -> Union[List[Node], List[List[Node]]]:
        """
        Get the Node list of the header stage.

        Returns: (Union[List[Node], List[List[Node]]]) The Node list of the header stage.

        Raises: Exception - If the document has no header stage.
        """
        if self.header_stage:
            return self.tree.stages[self.header_stage]
        else:
            raise Exception('No header stage found')

    def get_leaves(self) -> List[Node]:
        """
        Get the leaves of the tree.

        Returns: (List[Node]) The leaves of the tree.
        """
        return self.tree.stages[len(self.tree.stages) - 1]

    def get_spine_count(self) -> int:
        """
        Get the number of spines in the document.

        Returns (int): The number of spines in the document.
        """
        return len(self.get_header_stage())  # TODO: test refactor

    def get_first_measure(self) -> int:
        """
        Get the index of the first measure of the document.

        Returns: (Int) The index of the first measure of the document.

        Raises: Exception - If the document has no measures.

        Examples:
            >>> import kernpy as kp
            >>> document, err = kp.read('score.krn')
            >>> document.get_first_measure()
            1
        """
        if len(self.measure_start_tree_stages) == 0:
            raise Exception('No measures found')

        return self.FIRST_MEASURE

    def measures_count(self) -> int:
        """
        Get the index of the last measure of the document.

        Returns: (Int) The index of the last measure of the document.

        Raises: Exception - If the document has no measures.

        Examples:
            >>> document, _ = kernpy.read('score.krn')
            >>> document.measures_count()
            10
            >>> for i in range(document.get_first_measure(), document.measures_count() + 1):
            >>>   options = kernpy.ExportOptions(from_measure=i, to_measure=i+4)
        """
        if len(self.measure_start_tree_stages) == 0:
            raise Exception('No measures found')

        return len(self.measure_start_tree_stages)

    def get_metacomments(self, KeyComment: Optional[str] = None, clear: bool = False) -> List[str]:
        """
        Get all metacomments in the document

        Args:
            KeyComment: Filter by a specific metacomment key: e.g. Use 'COM' to get only comments starting with\
                '!!!COM: '. If None, all metacomments are returned.
            clear: If True, the metacomment key is removed from the comment. E.g. '!!!COM: Coltrane' -> 'Coltrane'.\
                If False, the metacomment key is kept. E.g. '!!!COM: Coltrane' -> '!!!COM: Coltrane'. \
                The clear functionality is equivalent to the following code:
                ```python
                comment = '!!!COM: Coltrane'
                clean_comment = comment.replace(f"!!!{KeyComment}: ", "")
                ```
                Other formats are not supported.

        Returns: A list of metacomments.

        Examples:
            >>> document.get_metacomments()
            ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
            >>> document.get_metacomments(KeyComment='COM')
            ['!!!COM: Coltrane']
            >>> document.get_metacomments(KeyComment='COM', clear=True)
            ['Coltrane']
            >>> document.get_metacomments(KeyComment='non_existing_key')
            []
        """
        traversal = MetacommentsTraversal()
        self.tree.dfs_iterative(traversal)
        result = []
        for metacomment in traversal.metacomments:
            if KeyComment is None or metacomment.encoding.startswith(f"!!!{KeyComment}"):
                new_comment = metacomment.encoding
                if clear:
                    new_comment = metacomment.encoding.replace(f"!!!{KeyComment}: ", "")
                result.append(new_comment)

        return result

    @classmethod
    def tokens_to_encodings(cls, tokens: Sequence[AbstractToken]):
        """
        Get the encodings of a list of tokens.

        The method is equivalent to the following code:
            >>> tokens = kp.get_all_tokens()
            >>> [token.encoding for token in tokens if token.encoding is not None]

        Args:
            tokens (Sequence[AbstractToken]): list - A list of tokens.

        Returns: List[str] - A list of token encodings.

        Examples:
            >>> tokens = document.get_all_tokens()
            >>> Document.tokens_to_encodings(tokens)
            ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
        """
        encodings = [token.encoding for token in tokens if token.encoding is not None]
        return encodings

    def get_all_tokens(self, filter_by_categories: Optional[Sequence[TokenCategory]] = None) -> List[AbstractToken]:
        """
        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

        Returns:
            List[AbstractToken] - A list of all tokens.

        Examples:
            >>> tokens = document.get_all_tokens()
            >>> Document.tokens_to_encodings(tokens)
            >>> [type(t) for t in tokens]
            [<class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>]
        """
        computed_categories = TokenCategory.valid(include=filter_by_categories)
        traversal = TokensTraversal(False, computed_categories)
        self.tree.dfs_iterative(traversal)
        return traversal.tokens

    def get_all_tokens_encodings(
            self,
            filter_by_categories: Optional[Sequence[TokenCategory]] = None
    ) -> List[str]:
        """
        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.


        Returns:
            list[str] - A list of all token encodings.

        Examples:
            >>> tokens = document.get_all_tokens_encodings()
            >>> Document.tokens_to_encodings(tokens)
            ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
        """
        tokens = self.get_all_tokens(filter_by_categories)
        return Document.tokens_to_encodings(tokens)

    def get_unique_tokens(
            self,
            filter_by_categories: Optional[Sequence[TokenCategory]] = None
    ) -> List[AbstractToken]:
        """
        Get unique tokens.

        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

        Returns:
            List[AbstractToken] - A list of unique tokens.

        """
        computed_categories = TokenCategory.valid(include=filter_by_categories)
        traversal = TokensTraversal(True, computed_categories)
        self.tree.dfs_iterative(traversal)
        return traversal.tokens

    def get_unique_token_encodings(
            self,
            filter_by_categories: Optional[Sequence[TokenCategory]] = None
    ) -> List[str]:
        """
        Get unique token encodings.

        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

        Returns: List[str] - A list of unique token encodings.

        """
        tokens = self.get_unique_tokens(filter_by_categories)
        return Document.tokens_to_encodings(tokens)

    def get_voices(self, clean: bool = False):
        """
        Get the voices of the document.

        Args:
            clean (bool): Remove the first '!' from the voice name.

        Returns: A list of voices.

        Examples:
            >>> document.get_voices()
            ['!sax', '!piano', '!bass']
            >>> document.get_voices(clean=True)
            ['sax', 'piano', 'bass']
            >>> document.get_voices(clean=False)
            ['!sax', '!piano', '!bass']
        """
        from kernpy.core import TokenCategory
        voices = self.get_all_tokens(filter_by_categories=[TokenCategory.INSTRUMENTS])

        if clean:
            voices = [voice[1:] for voice in voices]
        return voices

    def clone(self):
        """
        Create a deep copy of the Document instance.

        Returns: A new instance of Document with the tree copied.

        """
        result = Document(copy(self.tree))
        result.measure_start_tree_stages = copy(self.measure_start_tree_stages)
        result.page_bounding_boxes = copy(self.page_bounding_boxes)
        result.header_stage = copy(self.header_stage)

        return result

    def append_spines(self, spines) -> None:
        """
        Append the spines directly to current document tree.

        Args:
            spines(list): A list of spines to append.

        Returns: None

        Examples:
            >>> import kernpy as kp
            >>> doc, _ = kp.read('score.krn')
            >>> spines = [
            >>> '4e\t4f\t4g\t4a\n4b\t4c\t4d\t4e\n=\t=\t=\t=\n',
            >>> '4c\t4d\t4e\t4f\n4g\t4a\t4b\t4c\n=\t=\t=\t=\n',
            >>> ]
            >>> doc.append_spines(spines)
            None
        """
        raise NotImplementedError()
        if len(spines) != self.get_spine_count():
            raise Exception(f"Spines count mismatch: {len(spines)} != {self.get_spine_count()}")

        for spine in spines:
            return

    def add(self, other: 'Document', *, check_core_spines_only: Optional[bool] = False) -> 'Document':
        """
        Concatenate one document to the current document: Modify the current object!

        Args:
            other: The document to concatenate.
            check_core_spines_only: If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

        Returns ('Document'): The current document (self) with the other document concatenated.
        """
        if not Document.match(self, other, check_core_spines_only=check_core_spines_only):
            raise Exception(f'Documents are not compatible for addition. '
                            f'Headers do not match with check_core_spines_only={check_core_spines_only}. '
                            f'self: {self.get_header_nodes()}, other: {other.get_header_nodes()}. ')

        current_header_nodes = self.get_header_stage()
        other_header_nodes = other.get_header_stage()

        current_leaf_nodes = self.get_leaves()
        flatten = lambda lst: [item for sublist in lst for item in sublist]
        other_first_level_children = [flatten(c.children) for c in other_header_nodes]  # avoid header stage

        for current_leaf, other_first_level_child in zip(current_leaf_nodes, other_first_level_children, strict=False):
            # Ignore extra spines from other document.
            # But if there are extra spines in the current document, it will raise an exception.
            if current_leaf.token.encoding == TERMINATOR:
                # remove the '*-' token from the current document
                current_leaf_index = current_leaf.parent.children.index(current_leaf)
                current_leaf.parent.children.pop(current_leaf_index)
                current_leaf.parent.children.insert(current_leaf_index, other_first_level_child)

            self.tree.add_node(
                stage=len(self.tree.stages) - 1,  # TODO: check offset 0, +1, -1 ????
                parent=current_leaf,
                token=other_first_level_child.token,
                last_spine_operator_node=other_first_level_child.last_spine_operator_node,
                previous_signature_nodes=other_first_level_child.last_signature_nodes,
                header_node=other_first_level_child.header_node
            )

        return self

    def get_header_nodes(self) -> List[HeaderToken]:
        """
        Get the header nodes of the current document.

        Returns: List[HeaderToken]: A list with the header nodes of the current document.
        """
        return [token for token in self.get_all_tokens(filter_by_categories=None) if isinstance(token, HeaderToken)]

    def get_spine_ids(self) -> List[int]:
        """
        Get the spine indexes of the current document.

        Returns List[int]: A list with the indexes of the current document.

        Examples:
            >>> document.get_spine_ids()
            [0, 1, 2, 3, 4]
        """
        header_nodes = self.get_header_nodes()
        return [node.spine_id for node in header_nodes]

    def frequencies(self, token_categories: Optional[Sequence[TokenCategory]] = None) -> Dict:
        """
        Frequency of tokens in the document.


        Args:
            token_categories (Optional[Sequence[TokenCategory]]): If None, all tokens are considered.
        Returns (Dict):
            A dictionary with the category and the number of occurrences of each token.

        """
        tokens = self.get_all_tokens(filter_by_categories=token_categories)
        frequencies = {}
        for t in tokens:
            if t.encoding in frequencies:
                frequencies[t.encoding]['occurrences'] += 1
            else:
                frequencies[t.encoding] = {
                    'occurrences': 1,
                    'category': t.category.name,
                }

        return frequencies

    def split(self) -> List['Document']:
        """
        Split the current document into a list of documents, one for each **kern spine.
        Each resulting document will contain one **kern spine along with all non-kern spines.

        Returns:
            List['Document']: A list of documents, where each document contains one **kern spine
            and all non-kern spines from the original document.

        Examples:
            >>> document.split()
            [<Document: score.krn>, <Document: score.krn>, <Document: score.krn>]
        """
        raise NotImplementedError
        new_documents = []
        self_document_copy = deepcopy(self)
        kern_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding == '**kern']
        other_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding != '**kern']
        spine_ids = self_document_copy.get_spine_ids()

        for header_node in kern_header_nodes:
            if header_node.spine_id not in spine_ids:
                continue

            spine_ids.remove(header_node.spine_id)

            new_tree = deepcopy(self.tree)
            prev_node = new_tree.root
            while not isinstance(prev_node, HeaderToken):
                prev_node = prev_node.children[0]

            if not prev_node or not isinstance(prev_node, HeaderToken):
                raise Exception(f'Header node not found: {prev_node} in {header_node}')

            new_children = list(filter(lambda x: x.spine_id == header_node.spine_id, prev_node.children))
            new_tree.root = new_children

            new_document = Document(new_tree)

            new_documents.append(new_document)

        return new_documents

    @classmethod
    def to_concat(cls, first_doc: 'Document', second_doc: 'Document', deep_copy: bool = True) -> 'Document':
        """
        Concatenate two documents.

        Args:
            first_doc (Document): The first document.
            second_doc (Document): The second document.
            deep_copy (bool): If True, the documents are deep copied. If False, the documents are shallow copied.

        Returns: A new instance of Document with the documents concatenated.
        """
        first_doc = first_doc.clone() if deep_copy else first_doc
        second_doc = second_doc.clone() if deep_copy else second_doc
        first_doc.add(second_doc)

        return first_doc

    @classmethod
    def match(cls, a: 'Document', b: 'Document', *, check_core_spines_only: Optional[bool] = False) -> bool:
        """
        Match two documents. Two documents match if they have the same spine structure.

        Args:
            a (Document): The first document.
            b (Document): The second document.
            check_core_spines_only (Optional[bool]): If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

        Returns: True if the documents match, False otherwise.

        Examples:

        """
        if check_core_spines_only:
            return [token.encoding for token in a.get_header_nodes() if token.encoding in CORE_HEADERS] \
                == [token.encoding for token in b.get_header_nodes() if token.encoding in CORE_HEADERS]
        else:
            return [token.encoding for token in a.get_header_nodes()] \
                == [token.encoding for token in b.get_header_nodes()]


    def to_transposed(self, interval: str, direction: str = Direction.UP.value) -> 'Document':
        """
        Create a new document with the transposed notes without modifying the original document.

        Args:
            interval (str): The name of the interval to transpose. It can be 'P4', 'P5', 'M2', etc. Check the \
             kp.AVAILABLE_INTERVALS for the available intervals.
            direction (str): The direction to transpose. It can be 'up' or 'down'.

        Returns:
            Document: A new document with the notes transposed.
        """
        if interval not in AVAILABLE_INTERVALS:
            raise ValueError(
                f"Interval {interval!r} is not available. "
                f"Available intervals are: {AVAILABLE_INTERVALS}"
            )

        if direction not in (Direction.UP.value, Direction.DOWN.value):
            raise ValueError(
                f"Direction {direction!r} is not available. "
                f"Available directions are: "
                f"{Direction.UP.value!r}, {Direction.DOWN.value!r}"
            )

        new_document = self.clone()

        # BFS through the tree
        root = new_document.tree.root
        queue = Queue()
        queue.put(root)

        while not queue.empty():
            node = queue.get()

            if isinstance(node.token, NoteRestToken):
                orig_token = node.token

                new_subtokens = []
                transposed_pitch_encoding = None

                # Transpose each pitch subtoken in the pitch–duration list
                for subtoken in orig_token.pitch_duration_subtokens:
                    if subtoken.category == TokenCategory.PITCH:
                        # transpose() returns a new pitch subtoken
                        tp = transpose(
                            input_encoding=subtoken.encoding,
                            interval=IntervalsByName[interval],
                            direction=direction,
                            input_format=NotationEncoding.HUMDRUM.value,
                            output_format=NotationEncoding.HUMDRUM.value,
                        )
                        new_subtokens.append(Subtoken(tp, subtoken.category))
                        transposed_pitch_encoding = tp
                    else:
                        # leave duration subtokens untouched
                        new_subtokens.append(Subtoken(subtoken.encoding, subtoken.category))

                # Replace the node’s token with a new NoteRestToken
                node.token = NoteRestToken(
                    encoding=transposed_pitch_encoding,
                    pitch_duration_subtokens=new_subtokens,
                    decoration_subtokens=orig_token.decoration_subtokens,
                )

            # enqueue children
            for child in node.children:
                queue.put(child)

        # Return the transposed clone
        return new_document


    def __iter__(self):
        """
        Get the indexes to export all the document.

        Returns: An iterator with the indexes to export the document.
        """
        return iter(range(self.get_first_measure(), self.measures_count() + 1))

    def __next__(self):
        """
        Get the next index to export the document.

        Returns: The next index to export the document.
        """
        return next(iter(range(self.get_first_measure(), self.measures_count() + 1)))

__init__(tree)

Constructor for Document class.

Parameters:

Name Type Description Default
tree MultistageTree

The tree structure of the document where all the nodes are stored.

required
Source code in kernpy/core/document.py
def __init__(self, tree: MultistageTree):
    """
    Constructor for Document class.

    Args:
        tree (MultistageTree): The tree structure of the document where all the nodes are stored.
    """
    self.tree = tree  # TODO: ? Should we use copy.deepcopy() here?
    self.measure_start_tree_stages = []
    self.page_bounding_boxes = {}
    self.header_stage = None

__iter__()

Get the indexes to export all the document.

Returns: An iterator with the indexes to export the document.

Source code in kernpy/core/document.py
def __iter__(self):
    """
    Get the indexes to export all the document.

    Returns: An iterator with the indexes to export the document.
    """
    return iter(range(self.get_first_measure(), self.measures_count() + 1))

__next__()

Get the next index to export the document.

Returns: The next index to export the document.

Source code in kernpy/core/document.py
def __next__(self):
    """
    Get the next index to export the document.

    Returns: The next index to export the document.
    """
    return next(iter(range(self.get_first_measure(), self.measures_count() + 1)))

add(other, *, check_core_spines_only=False)

Concatenate one document to the current document: Modify the current object!

Parameters:

Name Type Description Default
other 'Document'

The document to concatenate.

required
check_core_spines_only Optional[bool]

If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

False

Returns ('Document'): The current document (self) with the other document concatenated.

Source code in kernpy/core/document.py
def add(self, other: 'Document', *, check_core_spines_only: Optional[bool] = False) -> 'Document':
    """
    Concatenate one document to the current document: Modify the current object!

    Args:
        other: The document to concatenate.
        check_core_spines_only: If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

    Returns ('Document'): The current document (self) with the other document concatenated.
    """
    if not Document.match(self, other, check_core_spines_only=check_core_spines_only):
        raise Exception(f'Documents are not compatible for addition. '
                        f'Headers do not match with check_core_spines_only={check_core_spines_only}. '
                        f'self: {self.get_header_nodes()}, other: {other.get_header_nodes()}. ')

    current_header_nodes = self.get_header_stage()
    other_header_nodes = other.get_header_stage()

    current_leaf_nodes = self.get_leaves()
    flatten = lambda lst: [item for sublist in lst for item in sublist]
    other_first_level_children = [flatten(c.children) for c in other_header_nodes]  # avoid header stage

    for current_leaf, other_first_level_child in zip(current_leaf_nodes, other_first_level_children, strict=False):
        # Ignore extra spines from other document.
        # But if there are extra spines in the current document, it will raise an exception.
        if current_leaf.token.encoding == TERMINATOR:
            # remove the '*-' token from the current document
            current_leaf_index = current_leaf.parent.children.index(current_leaf)
            current_leaf.parent.children.pop(current_leaf_index)
            current_leaf.parent.children.insert(current_leaf_index, other_first_level_child)

        self.tree.add_node(
            stage=len(self.tree.stages) - 1,  # TODO: check offset 0, +1, -1 ????
            parent=current_leaf,
            token=other_first_level_child.token,
            last_spine_operator_node=other_first_level_child.last_spine_operator_node,
            previous_signature_nodes=other_first_level_child.last_signature_nodes,
            header_node=other_first_level_child.header_node
        )

    return self
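The splice performed by `add()` can be sketched with plain lists standing in for tree stages: the `'*-'` terminator row of the first document is removed before the body rows of the second document are attached. This is an illustrative sketch only; `concat_rows` and the row lists are hypothetical, not kernpy API.

```python
# Illustrative sketch of Document.add(): remove the spine-terminator row
# of the first document, then append the rows of the second document
# (whose header row is assumed to have been skipped already).
TERMINATOR = '*-'

def concat_rows(first_rows, second_rows):
    """Drop the terminator row of `first_rows`, then append `second_rows`."""
    body = [row for row in first_rows if any(cell != TERMINATOR for cell in row)]
    return body + second_rows

first = [['**kern', '**kern'], ['4c', '4e'], ['*-', '*-']]
second = [['4d', '4f'], ['*-', '*-']]
print(concat_rows(first, second))
```

The second document keeps its own terminator, so the result ends with a single `'*-'` row per spine, as a concatenated kern file would.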

append_spines(spines)

    Append the spines directly to current document tree.

    Args:
        spines(list): A list of spines to append.

    Returns: None

    Examples:
        >>> import kernpy as kp
        >>> doc, _ = kp.read('score.krn')
        >>> spines = [
        >>> '4e\t4f\t4g\t4a\n4b\t4c\t4d\t4e\n=\t=\t=\t=\n',
        >>> '4c\t4d\t4e\t4f\n4g\t4a\t4b\t4c\n=\t=\t=\t=\n',
        >>> ]
        >>> doc.append_spines(spines)
        None

Source code in kernpy/core/document.py
def append_spines(self, spines) -> None:
    """
    Append the spines directly to current document tree.

    Args:
        spines(list): A list of spines to append.

    Returns: None

    Examples:
        >>> import kernpy as kp
        >>> doc, _ = kp.read('score.krn')
        >>> spines = [
        >>> '4e\t4f\t4g\t4a\n4b\t4c\t4d\t4e\n=\t=\t=\t=\n',
        >>> '4c\t4d\t4e\t4f\n4g\t4a\t4b\t4c\n=\t=\t=\t=\n',
       >>> ]
       >>> doc.append_spines(spines)
       None
    """
    raise NotImplementedError()
    if len(spines) != self.get_spine_count():
        raise Exception(f"Spines count mismatch: {len(spines)} != {self.get_spine_count()}")

    for spine in spines:
        return

clone()

Create a deep copy of the Document instance.

Returns: A new instance of Document with the tree copied.

Source code in kernpy/core/document.py
def clone(self):
    """
    Create a deep copy of the Document instance.

    Returns: A new instance of Document with the tree copied.

    """
    result = Document(copy(self.tree))
    result.measure_start_tree_stages = copy(self.measure_start_tree_stages)
    result.page_bounding_boxes = copy(self.page_bounding_boxes)
    result.header_stage = copy(self.header_stage)

    return result

frequencies(token_categories=None)

Frequency of tokens in the document.

Parameters:

Name Type Description Default
token_categories Optional[Sequence[TokenCategory]]

If None, all tokens are considered.

None

Returns (Dict): A dictionary with the category and the number of occurrences of each token.

Source code in kernpy/core/document.py
def frequencies(self, token_categories: Optional[Sequence[TokenCategory]] = None) -> Dict:
    """
    Frequency of tokens in the document.


    Args:
        token_categories (Optional[Sequence[TokenCategory]]): If None, all tokens are considered.
    Returns (Dict):
        A dictionary with the category and the number of occurrences of each token.

    """
    tokens = self.get_all_tokens(filter_by_categories=token_categories)
    frequencies = {}
    for t in tokens:
        if t.encoding in frequencies:
            frequencies[t.encoding]['occurrences'] += 1
        else:
            frequencies[t.encoding] = {
                'occurrences': 1,
                'category': t.category.name,
            }

    return frequencies
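The counting scheme above keeps one dictionary entry per distinct encoding, storing the occurrence count and the token category. A minimal self-contained sketch, where `FakeToken` is a stand-in for kernpy's token objects (illustration only):

```python
# Minimal sketch of the counting scheme used by frequencies(): one entry
# per distinct encoding, with the occurrence count and the category name.
from collections import namedtuple

FakeToken = namedtuple('FakeToken', ['encoding', 'category_name'])

def count_frequencies(tokens):
    freqs = {}
    for t in tokens:
        if t.encoding in freqs:
            freqs[t.encoding]['occurrences'] += 1
        else:
            freqs[t.encoding] = {'occurrences': 1, 'category': t.category_name}
    return freqs

tokens = [FakeToken('4c', 'NOTE'), FakeToken('4c', 'NOTE'), FakeToken('=', 'BARLINES')]
print(count_frequencies(tokens))
```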

get_all_tokens(filter_by_categories=None)

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns:

Type Description
List[AbstractToken]

List[AbstractToken] - A list of all tokens.

Examples:

>>> tokens = document.get_all_tokens()
>>> Document.tokens_to_encodings(tokens)
>>> [type(t) for t in tokens]
[<class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>]
Source code in kernpy/core/document.py
def get_all_tokens(self, filter_by_categories: Optional[Sequence[TokenCategory]] = None) -> List[AbstractToken]:
    """
    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

    Returns:
        List[AbstractToken] - A list of all tokens.

    Examples:
        >>> tokens = document.get_all_tokens()
        >>> Document.tokens_to_encodings(tokens)
        >>> [type(t) for t in tokens]
        [<class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>]
    """
    computed_categories = TokenCategory.valid(include=filter_by_categories)
    traversal = TokensTraversal(False, computed_categories)
    self.tree.dfs_iterative(traversal)
    return traversal.tokens

get_all_tokens_encodings(filter_by_categories=None)

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns:

Type Description
List[str]

list[str] - A list of all token encodings.

Examples:

>>> tokens = document.get_all_tokens_encodings()
>>> Document.tokens_to_encodings(tokens)
['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
Source code in kernpy/core/document.py
def get_all_tokens_encodings(
        self,
        filter_by_categories: Optional[Sequence[TokenCategory]] = None
) -> List[str]:
    """
    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.


    Returns:
        list[str] - A list of all token encodings.

    Examples:
        >>> tokens = document.get_all_tokens_encodings()
        >>> Document.tokens_to_encodings(tokens)
        ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
    """
    tokens = self.get_all_tokens(filter_by_categories)
    return Document.tokens_to_encodings(tokens)

get_first_measure()

Get the index of the first measure of the document.

Returns: (Int) The index of the first measure of the document.

Raises: Exception - If the document has no measures.

Examples:

>>> import kernpy as kp
>>> document, err = kp.read('score.krn')
>>> document.get_first_measure()
1
Source code in kernpy/core/document.py
def get_first_measure(self) -> int:
    """
    Get the index of the first measure of the document.

    Returns: (Int) The index of the first measure of the document.

    Raises: Exception - If the document has no measures.

    Examples:
        >>> import kernpy as kp
        >>> document, err = kp.read('score.krn')
        >>> document.get_first_measure()
        1
    """
    if len(self.measure_start_tree_stages) == 0:
        raise Exception('No measures found')

    return self.FIRST_MEASURE

get_header_nodes()

Get the header nodes of the current document.

Returns: List[HeaderToken]: A list with the header nodes of the current document.

Source code in kernpy/core/document.py
def get_header_nodes(self) -> List[HeaderToken]:
    """
    Get the header nodes of the current document.

    Returns: List[HeaderToken]: A list with the header nodes of the current document.
    """
    return [token for token in self.get_all_tokens(filter_by_categories=None) if isinstance(token, HeaderToken)]

get_header_stage()

Get the Node list of the header stage.

Returns: (Union[List[Node], List[List[Node]]]) The Node list of the header stage.

Raises: Exception - If the document has no header stage.

Source code in kernpy/core/document.py
def get_header_stage(self) -> Union[List[Node], List[List[Node]]]:
    """
    Get the Node list of the header stage.

    Returns: (Union[List[Node], List[List[Node]]]) The Node list of the header stage.

    Raises: Exception - If the document has no header stage.
    """
    if self.header_stage:
        return self.tree.stages[self.header_stage]
    else:
        raise Exception('No header stage found')

get_leaves()

Get the leaves of the tree.

Returns: (List[Node]) The leaves of the tree.

Source code in kernpy/core/document.py
def get_leaves(self) -> List[Node]:
    """
    Get the leaves of the tree.

    Returns: (List[Node]) The leaves of the tree.
    """
    return self.tree.stages[len(self.tree.stages) - 1]

get_metacomments(KeyComment=None, clear=False)

Get all metacomments in the document

Parameters:

Name Type Description Default
KeyComment Optional[str]

Filter by a specific metacomment key: e.g. Use 'COM' to get only comments starting with '!!!COM: '. If None, all metacomments are returned.

None
clear bool

If True, the metacomment key is removed from the comment. E.g. '!!!COM: Coltrane' -> 'Coltrane'. If False, the metacomment key is kept. E.g. '!!!COM: Coltrane' -> '!!!COM: Coltrane'. The clear functionality is equivalent to the following code:

comment = '!!!COM: Coltrane'
clean_comment = comment.replace(f"!!!{KeyComment}: ", "")

Other formats are not supported.

False

Returns: A list of metacomments.

Examples:

>>> document.get_metacomments()
['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
>>> document.get_metacomments(KeyComment='COM')
['!!!COM: Coltrane']
>>> document.get_metacomments(KeyComment='COM', clear=True)
['Coltrane']
>>> document.get_metacomments(KeyComment='non_existing_key')
[]
Source code in kernpy/core/document.py
def get_metacomments(self, KeyComment: Optional[str] = None, clear: bool = False) -> List[str]:
    """
    Get all metacomments in the document

    Args:
        KeyComment: Filter by a specific metacomment key: e.g. Use 'COM' to get only comments starting with\
            '!!!COM: '. If None, all metacomments are returned.
        clear: If True, the metacomment key is removed from the comment. E.g. '!!!COM: Coltrane' -> 'Coltrane'.\
            If False, the metacomment key is kept. E.g. '!!!COM: Coltrane' -> '!!!COM: Coltrane'. \
            The clear functionality is equivalent to the following code:
            ```python
            comment = '!!!COM: Coltrane'
            clean_comment = comment.replace(f"!!!{KeyComment}: ", "")
            ```
            Other formats are not supported.

    Returns: A list of metacomments.

    Examples:
        >>> document.get_metacomments()
        ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
        >>> document.get_metacomments(KeyComment='COM')
        ['!!!COM: Coltrane']
        >>> document.get_metacomments(KeyComment='COM', clear=True)
        ['Coltrane']
        >>> document.get_metacomments(KeyComment='non_existing_key')
        []
    """
    traversal = MetacommentsTraversal()
    self.tree.dfs_iterative(traversal)
    result = []
    for metacomment in traversal.metacomments:
        if KeyComment is None or metacomment.encoding.startswith(f"!!!{KeyComment}"):
            new_comment = metacomment.encoding
            if clear:
                new_comment = metacomment.encoding.replace(f"!!!{KeyComment}: ", "")
            result.append(new_comment)

    return result
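The filter-and-clear behaviour can be sketched on plain strings rather than kernpy token objects (`filter_metacomments` is a hypothetical helper, for illustration only):

```python
# Sketch of the metacomment filtering described above: keep comments whose
# key matches, optionally stripping the '!!!KEY: ' prefix when clear=True.
def filter_metacomments(comments, key=None, clear=False):
    result = []
    for comment in comments:
        if key is None or comment.startswith(f"!!!{key}"):
            result.append(comment.replace(f"!!!{key}: ", "") if clear else comment)
    return result

comments = ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
print(filter_metacomments(comments, key='COM', clear=True))  # ['Coltrane']
```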

get_spine_count()

Get the number of spines in the document.

Returns (int): The number of spines in the document.

Source code in kernpy/core/document.py
def get_spine_count(self) -> int:
    """
    Get the number of spines in the document.

    Returns (int): The number of spines in the document.
    """
    return len(self.get_header_stage())  # TODO: test refactor

get_spine_ids()

Get the indexes of the current document.

Returns List[int]: A list with the indexes of the current document.

Examples:

>>> document.get_spine_ids()
[0, 1, 2, 3, 4]
Source code in kernpy/core/document.py
def get_spine_ids(self) -> List[int]:
    """
            Get the indexes of the current document.

            Returns List[int]: A list with the indexes of the current document.

            Examples:
                >>> document.get_all_spine_indexes()
                [0, 1, 2, 3, 4]
            """
    header_nodes = self.get_header_nodes()
    return [node.spine_id for node in header_nodes]

get_unique_token_encodings(filter_by_categories=None)

Get unique token encodings.

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns: List[str] - A list of unique token encodings.

Source code in kernpy/core/document.py
def get_unique_token_encodings(
        self,
        filter_by_categories: Optional[Sequence[TokenCategory]] = None
) -> List[str]:
    """
    Get unique token encodings.

    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

    Returns: List[str] - A list of unique token encodings.

    """
    tokens = self.get_unique_tokens(filter_by_categories)
    return Document.tokens_to_encodings(tokens)

get_unique_tokens(filter_by_categories=None)

Get unique tokens.

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns:

Type Description
List[AbstractToken]

List[AbstractToken] - A list of unique tokens.

Source code in kernpy/core/document.py
def get_unique_tokens(
        self,
        filter_by_categories: Optional[Sequence[TokenCategory]] = None
) -> List[AbstractToken]:
    """
    Get unique tokens.

    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

    Returns:
        List[AbstractToken] - A list of unique tokens.

    """
    computed_categories = TokenCategory.valid(include=filter_by_categories)
    traversal = TokensTraversal(True, computed_categories)
    self.tree.dfs_iterative(traversal)
    return traversal.tokens

get_voices(clean=False)

Get the voices of the document.

Args: clean (bool): Remove the first '!' from the voice name.

Returns: A list of voices.

Examples:

>>> document.get_voices()
['!sax', '!piano', '!bass']
>>> document.get_voices(clean=True)
['sax', 'piano', 'bass']
>>> document.get_voices(clean=False)
['!sax', '!piano', '!bass']
Source code in kernpy/core/document.py
def get_voices(self, clean: bool = False):
    """
    Get the voices of the document.

    Args:
        clean (bool): Remove the first '!' from the voice name.

    Returns: A list of voices.

    Examples:
        >>> document.get_voices()
        ['!sax', '!piano', '!bass']
        >>> document.get_voices(clean=True)
        ['sax', 'piano', 'bass']
        >>> document.get_voices(clean=False)
        ['!sax', '!piano', '!bass']
    """
    from kernpy.core import TokenCategory
    voices = self.get_all_tokens(filter_by_categories=[TokenCategory.INSTRUMENTS])

    if clean:
        voices = [voice[1:] for voice in voices]
    return voices
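The `clean=True` behaviour amounts to dropping the leading `'!'` from each voice label. A sketch on plain strings (`clean_voices` is a hypothetical helper, not kernpy API):

```python
# Sketch of the clean=True behaviour of get_voices(): strip the leading
# '!' from each voice label.
def clean_voices(voices, clean=False):
    return [v[1:] for v in voices] if clean else list(voices)

print(clean_voices(['!sax', '!piano', '!bass'], clean=True))  # ['sax', 'piano', 'bass']
```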

match(a, b, *, check_core_spines_only=False) classmethod

Match two documents. Two documents match if they have the same spine structure.

Parameters:

Name Type Description Default
a Document

The first document.

required
b Document

The second document.

required
check_core_spines_only Optional[bool]

If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

False

Returns: True if the documents match, False otherwise.

Examples:

Source code in kernpy/core/document.py
@classmethod
def match(cls, a: 'Document', b: 'Document', *, check_core_spines_only: Optional[bool] = False) -> bool:
    """
    Match two documents. Two documents match if they have the same spine structure.

    Args:
        a (Document): The first document.
        b (Document): The second document.
        check_core_spines_only (Optional[bool]): If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

    Returns: True if the documents match, False otherwise.

    Examples:

    """
    if check_core_spines_only:
        return [token.encoding for token in a.get_header_nodes() if token.encoding in CORE_HEADERS] \
            == [token.encoding for token in b.get_header_nodes() if token.encoding in CORE_HEADERS]
    else:
        return [token.encoding for token in a.get_header_nodes()] \
            == [token.encoding for token in b.get_header_nodes()]
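The matching rule compares the ordered lists of header encodings; with `check_core_spines_only`, only the core headers take part in the comparison. A sketch on plain string lists, where `CORE` stands in for kernpy's `CORE_HEADERS` (illustration only):

```python
# Sketch of Document.match(): two documents match when their ordered header
# encodings are equal; with core_only, only core headers are compared.
CORE = {'**kern', '**mens'}

def headers_match(a_headers, b_headers, core_only=False):
    if core_only:
        return [h for h in a_headers if h in CORE] == [h for h in b_headers if h in CORE]
    return a_headers == b_headers

a = ['**kern', '**kern', '**text']
b = ['**kern', '**kern', '**dynam']
print(headers_match(a, b))                  # False: non-core spines differ
print(headers_match(a, b, core_only=True))  # True: same **kern structure
```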

measures_count()

Get the index of the last measure of the document.

Returns: (Int) The index of the last measure of the document.

Raises: Exception - If the document has no measures.

Examples:

>>> document, _ = kernpy.read('score.krn')
>>> document.measures_count()
10
>>> for i in range(document.get_first_measure(), document.measures_count() + 1):
>>>   options = kernpy.ExportOptions(from_measure=i, to_measure=i+4)
Source code in kernpy/core/document.py
def measures_count(self) -> int:
    """
    Get the index of the last measure of the document.

    Returns: (Int) The index of the last measure of the document.

    Raises: Exception - If the document has no measures.

    Examples:
        >>> document, _ = kernpy.read('score.krn')
        >>> document.measures_count()
        10
        >>> for i in range(document.get_first_measure(), document.measures_count() + 1):
        >>>   options = kernpy.ExportOptions(from_measure=i, to_measure=i+4)
    """
    if len(self.measure_start_tree_stages) == 0:
        raise Exception('No measures found')

    return len(self.measure_start_tree_stages)
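The iteration pattern from the example above can be written out for a hypothetical 10-measure document: indexes run from `get_first_measure()` through `measures_count()`, inclusive (the measure bounds here are stand-ins for the Document getters):

```python
# Measure-window iteration as in the measures_count() example: indexes run
# from the first measure through the last, inclusive; each window spans up
# to five measures, clamped at the end of the document.
first_measure, last_measure = 1, 10  # stand-ins for the Document getters
windows = [(i, min(i + 4, last_measure)) for i in range(first_measure, last_measure + 1)]
print(windows[0], windows[-1])  # (1, 5) (10, 10)
```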

split()

Split the current document into a list of documents, one for each **kern spine. Each resulting document will contain one **kern spine along with all non-kern spines.

Returns:

Type Description
List['Document']

List['Document'] - A list of documents, where each document contains one **kern spine and all non-kern spines from the original document.

Examples:

>>> document.split()
[<Document: score.krn>, <Document: score.krn>, <Document: score.krn>]
Source code in kernpy/core/document.py
def split(self) -> List['Document']:
    """
    Split the current document into a list of documents, one for each **kern spine.
    Each resulting document will contain one **kern spine along with all non-kern spines.

    Returns:
        List['Document']: A list of documents, where each document contains one **kern spine
        and all non-kern spines from the original document.

    Examples:
        >>> document.split()
        [<Document: score.krn>, <Document: score.krn>, <Document: score.krn>]
    """
    raise NotImplementedError
    new_documents = []
    self_document_copy = deepcopy(self)
    kern_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding == '**kern']
    other_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding != '**kern']
    spine_ids = self_document_copy.get_spine_ids()

    for header_node in kern_header_nodes:
        if header_node.spine_id not in spine_ids:
            continue

        spine_ids.remove(header_node.spine_id)

        new_tree = deepcopy(self.tree)
        prev_node = new_tree.root
        while not isinstance(prev_node, HeaderToken):
            prev_node = prev_node.children[0]

        if not prev_node or not isinstance(prev_node, HeaderToken):
            raise Exception(f'Header node not found: {prev_node} in {header_node}')

        new_children = list(filter(lambda x: x.spine_id == header_node.spine_id, prev_node.children))
        new_tree.root = new_children

        new_document = Document(new_tree)

        new_documents.append(new_document)

    return new_documents

to_concat(first_doc, second_doc, deep_copy=True) classmethod

Concatenate two documents.

Parameters:

Name Type Description Default
first_doc Document

The first document.

required
second_doc Document

The second document.

required
deep_copy bool

If True, the documents are deep copied. If False, the documents are shallow copied.

True

Returns: A new instance of Document with the documents concatenated.

Source code in kernpy/core/document.py
@classmethod
def to_concat(cls, first_doc: 'Document', second_doc: 'Document', deep_copy: bool = True) -> 'Document':
    """
    Concatenate two documents.

    Args:
        first_doc (Document): The first document.
        second_doc (Document): The second document.
        deep_copy (bool): If True, the documents are deep copied. If False, the documents are shallow copied.

    Returns: A new instance of Document with the documents concatenated.
    """
    first_doc = first_doc.clone() if deep_copy else first_doc
    second_doc = second_doc.clone() if deep_copy else second_doc
    first_doc.add(second_doc)

    return first_doc
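The `deep_copy=True` contract can be sketched with lists standing in for `Document` instances: the inputs are cloned first, so the originals are left untouched (illustration only; `to_concat` here is a hypothetical stand-in, not the kernpy implementation):

```python
# Sketch of the to_concat() contract: with deep_copy=True the first input
# is cloned before concatenation, so the caller's object is not modified.
from copy import deepcopy

def to_concat(first, second, deep_copy=True):
    first = deepcopy(first) if deep_copy else first
    first.extend(second)  # plays the role of first_doc.add(second_doc)
    return first

a, b = [1, 2], [3, 4]
merged = to_concat(a, b)
print(merged, a)  # [1, 2, 3, 4] [1, 2] -- original unchanged
```

With `deep_copy=False` the concatenation would mutate `a` in place, which is the trade-off the flag exposes.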

to_transposed(interval, direction=Direction.UP.value)

Create a new document with the transposed notes without modifying the original document.

Parameters:

Name Type Description Default
interval str

The name of the interval to transpose. It can be 'P4', 'P5', 'M2', etc. Check the kp.AVAILABLE_INTERVALS for the available intervals.

required
direction str

The direction to transpose. It can be 'up' or 'down'.

UP.value

Returns: A new Document with the notes transposed.

Source code in kernpy/core/document.py
def to_transposed(self, interval: str, direction: str = Direction.UP.value) -> 'Document':
    """
    Create a new document with the transposed notes without modifying the original document.

    Args:
        interval (str): The name of the interval to transpose. It can be 'P4', 'P5', 'M2', etc. Check the \
         kp.AVAILABLE_INTERVALS for the available intervals.
        direction (str): The direction to transpose. It can be 'up' or 'down'.

    Returns:
        Document: A new instance of Document with the transposed notes.
    """
    if interval not in AVAILABLE_INTERVALS:
        raise ValueError(
            f"Interval {interval!r} is not available. "
            f"Available intervals are: {AVAILABLE_INTERVALS}"
        )

    if direction not in (Direction.UP.value, Direction.DOWN.value):
        raise ValueError(
            f"Direction {direction!r} is not available. "
            f"Available directions are: "
            f"{Direction.UP.value!r}, {Direction.DOWN.value!r}"
        )

    new_document = self.clone()

    # BFS through the tree
    root = new_document.tree.root
    queue = Queue()
    queue.put(root)

    while not queue.empty():
        node = queue.get()

        if isinstance(node.token, NoteRestToken):
            orig_token = node.token

            new_subtokens = []
            transposed_pitch_encoding = None

            # Transpose each pitch subtoken in the pitch–duration list
            for subtoken in orig_token.pitch_duration_subtokens:
                if subtoken.category == TokenCategory.PITCH:
                    # transpose() returns a new pitch subtoken
                    tp = transpose(
                        input_encoding=subtoken.encoding,
                        interval=IntervalsByName[interval],
                        direction=direction,
                        input_format=NotationEncoding.HUMDRUM.value,
                        output_format=NotationEncoding.HUMDRUM.value,
                    )
                    new_subtokens.append(Subtoken(tp, subtoken.category))
                    transposed_pitch_encoding = tp
                else:
                    # leave duration subtokens untouched
                    new_subtokens.append(Subtoken(subtoken.encoding, subtoken.category))

            # Replace the node’s token with a new NoteRestToken
            node.token = NoteRestToken(
                encoding=transposed_pitch_encoding,
                pitch_duration_subtokens=new_subtokens,
                decoration_subtokens=orig_token.decoration_subtokens,
            )

        # enqueue children
        for child in node.children:
            queue.put(child)

    # Return the transposed clone
    return new_document
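The BFS above visits each node once, replacing NoteRestTokens with transposed copies while leaving the tree shape untouched. The traversal pattern can be sketched on a toy node tree (Node and the uppercase "transposition" are illustrative stand-ins, not kernpy's API):

```python
from queue import Queue

class Node:
    """Toy tree node: a token string plus child nodes."""
    def __init__(self, token, children=None):
        self.token = token
        self.children = children or []

def transform_tokens(root, predicate, transform):
    """Breadth-first walk: replace every token matching predicate."""
    queue = Queue()
    queue.put(root)
    while not queue.empty():
        node = queue.get()
        if predicate(node.token):
            node.token = transform(node.token)
        for child in node.children:
            queue.put(child)
    return root

tree = Node('c', [Node('4d'), Node('*clefG2', [Node('4e')])])
# "Transpose": uppercase anything that is not an interpretation (illustrative rule).
transform_tokens(tree, lambda t: not t.startswith('*'), str.upper)
```

In to_transposed the predicate is an isinstance check on NoteRestToken and the transform rebuilds the pitch subtokens via transpose().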

tokens_to_encodings(tokens) classmethod

Get the encodings of a list of tokens.

The method is equivalent to the following code:

tokens = kp.get_all_tokens()
[token.encoding for token in tokens if token.encoding is not None]

Parameters:

Name Type Description Default
tokens Sequence[AbstractToken]

A list of tokens.

required

Returns: List[str] - A list of token encodings.

Examples:

>>> tokens = document.get_all_tokens()
>>> Document.tokens_to_encodings(tokens)
['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
Source code in kernpy/core/document.py
@classmethod
def tokens_to_encodings(cls, tokens: Sequence[AbstractToken]):
    """
    Get the encodings of a list of tokens.

    The method is equivalent to the following code:
        >>> tokens = kp.get_all_tokens()
        >>> [token.encoding for token in tokens if token.encoding is not None]

    Args:
        tokens (Sequence[AbstractToken]): A list of tokens.

    Returns: List[str] - A list of token encodings.

    Examples:
        >>> tokens = document.get_all_tokens()
        >>> Document.tokens_to_encodings(tokens)
        ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
    """
    encodings = [token.encoding for token in tokens if token.encoding is not None]
    return encodings
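The helper is a plain filtering comprehension; with a minimal stand-in token type the behavior looks like this (Tok is illustrative, not kernpy's AbstractToken):

```python
from dataclasses import dataclass
from typing import List, Optional, Sequence

@dataclass
class Tok:
    encoding: Optional[str]  # stand-in for AbstractToken.encoding

def tokens_to_encodings(tokens: Sequence[Tok]) -> List[str]:
    # Drop tokens whose encoding is missing, keep the rest in order.
    return [t.encoding for t in tokens if t.encoding is not None]

encodings = tokens_to_encodings([Tok('!!!COM: Coltrane'), Tok(None), Tok('4c')])
```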

Duration

Bases: ABC

Represents the duration of a note or a rest.

The duration is encoded using the Humdrum Kern format: a number that denotes the division of the whole note. A whole note is 1, a half note is 2, a quarter note is 4, an eighth note is 8, and so on. Notes and rests use the same numeric encoding.

This class does not limit the duration range.

In the following example, each token starts with its duration number.

**kern
*clefG2
2c          // half note
4c          // quarter note
8c          // eighth note
16c         // sixteenth note
*-
Source code in kernpy/core/tokens.py
class Duration(ABC):
    """
    Represents the duration of a note or a rest.

    The duration is encoded using the Humdrum Kern format:
    a number that denotes the division of the whole note.

    A whole note is 1, a half note is 2, a quarter note is 4, an eighth note is 8, and so on.
    Notes and rests use the same numeric encoding.

    This class does not limit the duration range.

    In the following example, each token starts with its duration number.
    ```
    **kern
    *clefG2
    2c          // half note
    4c          // quarter note
    8c          // eighth note
    16c         // sixteenth note
    *-
    ```
    """

    def __init__(self, raw_duration):
        self.encoding = str(raw_duration)

    @abstractmethod
    def modify(self, ratio: int):
        pass

    @abstractmethod
    def __deepcopy__(self, memo=None):
        pass

    @abstractmethod
    def __eq__(self, other):
        pass

    @abstractmethod
    def __ne__(self, other):
        pass

    @abstractmethod
    def __gt__(self, other):
        pass

    @abstractmethod
    def __lt__(self, other):
        pass

    @abstractmethod
    def __ge__(self, other):
        pass

    @abstractmethod
    def __le__(self, other):
        pass

    @abstractmethod
    def __str__(self):
        pass
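Because a kern duration number n denotes a 1/n division of the whole note (2 = half, 4 = quarter), the encoding converts directly to a metric length. A minimal sketch of that conversion — kern_duration_to_length is an illustrative helper, not part of kernpy:

```python
from fractions import Fraction

def kern_duration_to_length(duration: int) -> Fraction:
    """Length as a fraction of a whole note (kern: 1 = whole, 2 = half, 4 = quarter)."""
    if duration < 1:
        raise ValueError(f'Bad duration: {duration}')
    return Fraction(1, duration)

half = kern_duration_to_length(2)
quarter = kern_duration_to_length(4)
```

Using Fraction keeps sums exact, e.g. two quarter notes add up to exactly one half note.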

DurationClassical

Bases: Duration

Represents the duration in classical notation of a note or a rest.

Source code in kernpy/core/tokens.py
class DurationClassical(Duration):
    """
    Represents the duration in classical notation of a note or a rest.
    """

    def __init__(self, duration: int):
        """
        Create a new Duration object.

        Args:
            duration (int): duration representation in Humdrum Kern format

        Examples:
            >>> DurationClassical(2).duration
            2
            >>> DurationClassical(1).duration
            1
            >>> DurationClassical(0)
            Traceback (most recent call last):
            ...
            ValueError: Bad duration: 0 was provided.
            >>> DurationClassical(3)
            Traceback (most recent call last):
            ...
            ValueError: Bad duration: 3 was provided.
        """
        super().__init__(duration)
        if not DurationClassical.__is_valid_duration(duration):
            raise ValueError(f'Bad duration: {duration} was provided.')

        self.duration = int(duration)

    def modify(self, ratio: int):
        """
        Modify the duration of a note or a rest of the current object.

        Args:
            ratio (int): The factor to modify the duration. The factor must be greater than 0.

        Returns (DurationClassical): The new duration object with the modified duration.

        Examples:
            >>> duration = DurationClassical(2)
            >>> new_duration = duration.modify(2)
            >>> new_duration.duration
            4
            >>> duration = DurationClassical(2)
            >>> new_duration = duration.modify(0)
            Traceback (most recent call last):
            ...
            ValueError: Invalid factor provided: 0. The factor must be greater than 0.
            >>> duration = DurationClassical(2)
            >>> new_duration = duration.modify(-2)
            Traceback (most recent call last):
            ...
            ValueError: Invalid factor provided: -2. The factor must be greater than 0.
        """
        if not isinstance(ratio, int):
            raise ValueError(f'Invalid factor provided: {ratio}. The factor must be an integer.')
        if ratio <= 0:
            raise ValueError(f'Invalid factor provided: {ratio}. The factor must be greater than 0.')

        return copy.deepcopy(DurationClassical(self.duration * ratio))

    def __deepcopy__(self, memo=None):
        if memo is None:
            memo = {}

        new_instance = DurationClassical(self.duration)
        new_instance.duration = self.duration
        return new_instance

    def __str__(self):
        return f'{self.duration}'

    def __eq__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool): True if the durations are equal, False otherwise


        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(2)
            >>> duration == duration2
            True
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration == duration2
            False
        """
        if not isinstance(other, DurationClassical):
            return False
        return self.duration == other.duration

    def __ne__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool):
            True if the durations are different, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(2)
            >>> duration != duration2
            False
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration != duration2
            True
        """
        return not self.__eq__(other)

    def __gt__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other: The other duration to compare

        Returns (bool):
            True if this duration is greater than the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration > duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration > duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration > duration2
            False
        """
        if not isinstance(other, DurationClassical):
            raise ValueError(f'Invalid comparison: > operator can not be used to compare duration with {type(other)}')
        return self.duration > other.duration

    def __lt__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool):
            True if this duration is less than the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration < duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration < duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration < duration2
            False
        """
        if not isinstance(other, DurationClassical):
            raise ValueError(f'Invalid comparison: < operator can not be used to compare duration with {type(other)}')
        return self.duration < other.duration

    def __ge__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool):
            True if this duration is greater than or equal to the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration >= duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration >= duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration >= duration2
            True
        """
        return self.__gt__(other) or self.__eq__(other)

    def __le__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns:
            True if this duration is less than or equal to the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration <= duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration <= duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration <= duration2
            True
        """
        return self.__lt__(other) or self.__eq__(other)

    @classmethod
    def __is_valid_duration(cls, duration: int) -> bool:
        try:
            duration = int(duration)
        except (TypeError, ValueError):
            return False

        # Valid durations are 1 or any positive even number.
        return duration == 1 or (duration > 0 and duration % 2 == 0)

__eq__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if the durations are equal, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(2)
>>> duration == duration2
True
>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration == duration2
False
Source code in kernpy/core/tokens.py
def __eq__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool): True if the durations are equal, False otherwise


    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(2)
        >>> duration == duration2
        True
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration == duration2
        False
    """
    if not isinstance(other, DurationClassical):
        return False
    return self.duration == other.duration

__ge__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if this duration is greater than or equal to the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration >= duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration >= duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration >= duration2
True
Source code in kernpy/core/tokens.py
def __ge__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool):
        True if this duration is greater than or equal to the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration >= duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration >= duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration >= duration2
        True
    """
    return self.__gt__(other) or self.__eq__(other)

__gt__(other)

Compare two durations.

Parameters:

Name Type Description Default
other 'DurationClassical'

The other duration to compare

required

Returns (bool): True if this duration is greater than the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration > duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration > duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration > duration2
False
Source code in kernpy/core/tokens.py
def __gt__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other: The other duration to compare

    Returns (bool):
        True if this duration is greater than the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration > duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration > duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration > duration2
        False
    """
    if not isinstance(other, DurationClassical):
        raise ValueError(f'Invalid comparison: > operator can not be used to compare duration with {type(other)}')
    return self.duration > other.duration

__init__(duration)

Create a new Duration object.

Parameters:

Name Type Description Default
duration int

duration representation in Humdrum Kern format

required

Examples:

>>> DurationClassical(2).duration
2
>>> DurationClassical(1).duration
1
>>> DurationClassical(0)
Traceback (most recent call last):
...
ValueError: Bad duration: 0 was provided.
>>> DurationClassical(3)
Traceback (most recent call last):
...
ValueError: Bad duration: 3 was provided.
Source code in kernpy/core/tokens.py
def __init__(self, duration: int):
    """
    Create a new Duration object.

    Args:
        duration (int): duration representation in Humdrum Kern format

    Examples:
        >>> DurationClassical(2).duration
        2
        >>> DurationClassical(1).duration
        1
        >>> DurationClassical(0)
        Traceback (most recent call last):
        ...
        ValueError: Bad duration: 0 was provided.
        >>> DurationClassical(3)
        Traceback (most recent call last):
        ...
        ValueError: Bad duration: 3 was provided.
    """
    super().__init__(duration)
    if not DurationClassical.__is_valid_duration(duration):
        raise ValueError(f'Bad duration: {duration} was provided.')

    self.duration = int(duration)

__le__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if this duration is less than or equal to the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration <= duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration <= duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration <= duration2
True
Source code in kernpy/core/tokens.py
def __le__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns:
        True if this duration is less than or equal to the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration <= duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration <= duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration <= duration2
        True
    """
    return self.__lt__(other) or self.__eq__(other)

__lt__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if this duration is less than the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration < duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration < duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration < duration2
False
Source code in kernpy/core/tokens.py
def __lt__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool):
        True if this duration is less than the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration < duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration < duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration < duration2
        False
    """
    if not isinstance(other, DurationClassical):
        raise ValueError(f'Invalid comparison: < operator can not be used to compare duration with {type(other)}')
    return self.duration < other.duration

__ne__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if the durations are different, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(2)
>>> duration != duration2
False
>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration != duration2
True
Source code in kernpy/core/tokens.py
def __ne__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool):
        True if the durations are different, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(2)
        >>> duration != duration2
        False
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration != duration2
        True
    """
    return not self.__eq__(other)

modify(ratio)

Modify the duration of a note or a rest of the current object.

Parameters:

Name Type Description Default
ratio int

The factor to modify the duration. The factor must be greater than 0.

required

Returns (DurationClassical): The new duration object with the modified duration.

Examples:

>>> duration = DurationClassical(2)
>>> new_duration = duration.modify(2)
>>> new_duration.duration
4
>>> duration = DurationClassical(2)
>>> new_duration = duration.modify(0)
Traceback (most recent call last):
...
ValueError: Invalid factor provided: 0. The factor must be greater than 0.
>>> duration = DurationClassical(2)
>>> new_duration = duration.modify(-2)
Traceback (most recent call last):
...
ValueError: Invalid factor provided: -2. The factor must be greater than 0.
Source code in kernpy/core/tokens.py
def modify(self, ratio: int):
    """
    Modify the duration of a note or a rest of the current object.

    Args:
        ratio (int): The factor to modify the duration. The factor must be greater than 0.

    Returns (DurationClassical): The new duration object with the modified duration.

    Examples:
        >>> duration = DurationClassical(2)
        >>> new_duration = duration.modify(2)
        >>> new_duration.duration
        4
        >>> duration = DurationClassical(2)
        >>> new_duration = duration.modify(0)
        Traceback (most recent call last):
        ...
        ValueError: Invalid factor provided: 0. The factor must be greater than 0.
        >>> duration = DurationClassical(2)
        >>> new_duration = duration.modify(-2)
        Traceback (most recent call last):
        ...
        ValueError: Invalid factor provided: -2. The factor must be greater than 0.
    """
    if not isinstance(ratio, int):
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be an integer.')
    if ratio <= 0:
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be greater than 0.')

    return copy.deepcopy(DurationClassical(self.duration * ratio))
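modify validates the ratio and returns a fresh object rather than mutating in place. The same contract in miniature, with an illustrative free function (not kernpy's API):

```python
def modify_duration(duration: int, ratio: int) -> int:
    """Return a new duration scaled by ratio, as DurationClassical.modify does."""
    if not isinstance(ratio, int):
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be an integer.')
    if ratio <= 0:
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be greater than 0.')
    return duration * ratio

doubled = modify_duration(2, 2)
```

Doubling a kern duration number halves the note's length: 2 (half note) becomes 4 (quarter note).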

DurationMensural

Bases: Duration

Represents the duration in mensural notation of a note or a rest.

Source code in kernpy/core/tokens.py
class DurationMensural(Duration):
    """
    Represents the duration in mensural notation of a note or a rest.
    """

    def __init__(self, duration):
        super().__init__(duration)
        self.duration = duration

    def __eq__(self, other):
        raise NotImplementedError()

    def modify(self, ratio: int):
        raise NotImplementedError()

    def __deepcopy__(self, memo=None):
        raise NotImplementedError()

    def __gt__(self, other):
        raise NotImplementedError()

    def __lt__(self, other):
        raise NotImplementedError()

    def __le__(self, other):
        raise NotImplementedError()

    def __str__(self):
        raise NotImplementedError()

    def __ge__(self, other):
        raise NotImplementedError()

    def __ne__(self, other):
        raise NotImplementedError()

DynSpineImporter

Bases: SpineImporter

Source code in kernpy/core/dyn_importer.py
class DynSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        DynSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        # TODO: Find out the differences between **dyn and **dynam and update this class. Using the same importer for both for now.
        dynam_importer = DynamSpineImporter()
        return dynam_importer.import_token(encoding)

__init__(verbose=False)

DynSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/dyn_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    DynSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

DynamSpineImporter

Bases: SpineImporter

Source code in kernpy/core/dynam_spine_importer.py
class DynamSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        DynamSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()  # TODO: Create a custom functional listener for DynamSpineImporter

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.DYNAMICS)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.DYNAMICS)

        return token
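import_token first tries the stricter kern parser and keeps the result only when its category falls in an accepted family; on a parse failure, or any other category, it falls back to a plain dynamics token. A sketch of that guard pattern with a toy classifier (all names here are illustrative):

```python
from enum import Enum, auto

class Category(Enum):
    STRUCTURAL = auto()
    BARLINES = auto()
    COMMENTS = auto()
    DYNAMICS = auto()

ACCEPTED = {Category.STRUCTURAL, Category.BARLINES, Category.COMMENTS}

def classify(encoding: str) -> Category:
    # Toy stand-in for KernSpineImporter: recognise barlines and comments only.
    if encoding.startswith('='):
        return Category.BARLINES
    if encoding.startswith('!'):
        return Category.COMMENTS
    raise ValueError(encoding)

def import_token(encoding: str) -> Category:
    try:
        category = classify(encoding)
    except Exception:
        return Category.DYNAMICS   # parser failed: treat as a dynamics mark
    if category not in ACCEPTED:
        return Category.DYNAMICS   # parsed, but not an accepted family
    return category

results = [import_token(e) for e in ('=1', '! comment', 'ff')]
```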

__init__(verbose=False)

DynamSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/dynam_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    DynamSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

EkernTokenizer

Bases: Tokenizer

EkernTokenizer converts a Token into an eKern (Extended **kern) string representation. This format uses a '@' separator for the main tokens and a '·' separator for the decoration tokens.

Source code in kernpy/core/tokenizers.py
class EkernTokenizer(Tokenizer):
    """
    EkernTokenizer converts a Token into an eKern (Extended **kern) string representation. This format uses a '@' separator for the \
    main tokens and a '·' separator for the decoration tokens.
    """

    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new EkernTokenizer

        Args:
            token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised.
        """
        super().__init__(token_categories=token_categories)

    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into an eKern string representation.
        Args:
            token (Token): Token to be tokenized.

        Returns (str): eKern string representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> EkernTokenizer().tokenize(token)
            '2@.@bb@-·_·L'

        """
        return token.export(filter_categories=lambda cat: cat in self.token_categories)

__init__(*, token_categories)

Create a new EkernTokenizer

Parameters:

Name Type Description Default
token_categories Set[TokenCategory]

Set of categories to be tokenized. If None, an exception is raised.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new EkernTokenizer

    Args:
        token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into an eKern string representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): eKern string representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> EkernTokenizer().tokenize(token)
'2@.@bb@-·_·L'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into an eKern string representation.
    Args:
        token (Token): Token to be tokenized.

    Returns (str): eKern string representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> EkernTokenizer().tokenize(token)
        '2@.@bb@-·_·L'

    """
    return token.export(filter_categories=lambda cat: cat in self.token_categories)
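
The separators described above can be illustrated with a small stand-alone helper (a sketch, not part of the kernpy API): '@' joins the main components of a token, and '·' introduces each decoration:

```python
def split_ekern(encoding: str):
    """Split an eKern-encoded token into (main components, decorations)."""
    main_part, _, decoration_part = encoding.partition('·')
    mains = main_part.split('@')
    decorations = decoration_part.split('·') if decoration_part else []
    return mains, decorations

print(split_ekern('2@.@bb@-·_·L'))
# (['2', '.', 'bb', '-'], ['_', 'L'])
```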

Encoding

Bases: Enum

Options for exporting a kern file.

Example:

>>> import kernpy as kp

Load a file
>>> doc, _ = kp.load('path/to/file.krn')

Save the file using the specified encoding
>>> exported_content = kp.dumps(encoding=kp.Encoding.normalizedKern)

Source code in kernpy/core/tokenizers.py
class Encoding(Enum):  # TODO: Eventually, polymorphism will be used to export different types of kern files
    """
    Options for exporting a kern file.

    Example:
        >>> import kernpy as kp
        >>> # Load a file
        >>> doc, _ = kp.load('path/to/file.krn')
        >>>
        >>> # Save the file using the specified encoding
        >>> exported_content = kp.dumps(encoding=kp.Encoding.normalizedKern)
    """
    eKern = 'ekern'
    normalizedKern = 'kern'
    bKern = 'bkern'
    bEkern = 'bekern'

    def prefix(self) -> str:
        """
        Get the prefix of the kern type.

        Returns (str): Prefix of the kern type.
        """
        if self == Encoding.eKern:
            return 'e'
        elif self == Encoding.normalizedKern:
            return ''
        elif self == Encoding.bKern:
            return 'b'
        elif self == Encoding.bEkern:
            return 'be'
        else:
            raise ValueError(f'Unknown kern type: {self}. '
                             f'Supported types are: '
                             f"{'-'.join([kern_type.name for kern_type in Encoding.__members__.values()])}")

prefix()

Get the prefix of the kern type.

Returns (str): Prefix of the kern type.

Source code in kernpy/core/tokenizers.py
def prefix(self) -> str:
    """
    Get the prefix of the kern type.

    Returns (str): Prefix of the kern type.
    """
    if self == Encoding.eKern:
        return 'e'
    elif self == Encoding.normalizedKern:
        return ''
    elif self == Encoding.bKern:
        return 'b'
    elif self == Encoding.bEkern:
        return 'be'
    else:
        raise ValueError(f'Unknown kern type: {self}. '
                         f'Supported types are: '
                         f"{'-'.join([kern_type.name for kern_type in Encoding.__members__.values()])}")
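
Note that each member's prefix concatenated with 'kern' reproduces the member's value, which is how the encoding variants relate. A minimal stand-alone sketch of that mapping (mirroring the enum listed above, without importing kernpy):

```python
from enum import Enum

class Encoding(Enum):
    eKern = 'ekern'
    normalizedKern = 'kern'
    bKern = 'bkern'
    bEkern = 'bekern'

PREFIX = {
    Encoding.eKern: 'e',
    Encoding.normalizedKern: '',
    Encoding.bKern: 'b',
    Encoding.bEkern: 'be',
}

# The prefix plus 'kern' yields the enum value for every member.
for enc in Encoding:
    assert PREFIX[enc] + 'kern' == enc.value
```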

ExportOptions

ExportOptions class.

Store the options to export a **kern file.

Source code in kernpy/core/exporter.py
class ExportOptions:
    """
    `ExportOptions` class.

    Store the options to export a **kern file.
    """

    def __init__(
            self,
            spine_types: [] = None,
            token_categories: [] = None,
            from_measure: int = None,
            to_measure: int = None,
            kern_type: Encoding = Encoding.normalizedKern,
            instruments: [] = None,
            show_measure_numbers: bool = False,
            spine_ids: [int] = None
    ):
        """
        Create a new ExportOptions object.

        Args:
            spine_types (Iterable): **kern, **mens, etc...
            token_categories (Iterable): TokenCategory
            from_measure (int): The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1
            to_measure (int): The measure to end exporting. When None, the exporter will end at the end of the file.
            kern_type (Encoding): The type of the kern file to export.
            instruments (Iterable): The instruments to export. When None, all the instruments will be exported.
            show_measure_numbers (Bool): Show the measure numbers in the exported file.
            spine_ids (Iterable): The ids of the spines to export. When None, all the spines will be exported. Spines ids start from 0 and they are increased by 1.

        Example:
            >>> import kernpy

            Create the importer and read the file
            >>> hi = Importer()
            >>> document = hi.import_file('file.krn')
            >>> exporter = Exporter()

            Export the file with the specified options
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> exported_data = exporter.export_string(document, options)

            Export only the lyrics
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LYRICS])
            >>> exported_data = exporter.export_string(document, options)

            Export the comments
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LINE_COMMENTS, TokenCategory.FIELD_COMMENTS])
            >>> exported_data = exporter.export_string(document, options)

            Export using the eKern version
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES, kern_type=Encoding.eKern)
            >>> exported_data = exporter.export_string(document, options)

        """
        self.spine_types = spine_types if spine_types is not None else deepcopy(HEADERS)
        self.from_measure = from_measure
        self.to_measure = to_measure
        self.token_categories = token_categories if token_categories is not None else [c for c in TokenCategory]
        self.kern_type = kern_type
        self.instruments = instruments
        self.show_measure_numbers = show_measure_numbers
        self.spine_ids = spine_ids  # When exporting, if spine_ids=None all the spines will be exported.

    def __eq__(self, other: 'ExportOptions') -> bool:
        """
        Compare two ExportOptions objects.

        Args:
            other: The other ExportOptions object to compare.

        Returns (bool):
            True if the objects are equal, False otherwise.

        Examples:
            >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options1 == options2
            True

            >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
            >>> options1 == options3
            False
        """
        return self.spine_types == other.spine_types and \
            self.token_categories == other.token_categories and \
            self.from_measure == other.from_measure and \
            self.to_measure == other.to_measure and \
            self.kern_type == other.kern_type and \
            self.instruments == other.instruments and \
            self.show_measure_numbers == other.show_measure_numbers and \
            self.spine_ids == other.spine_ids

    def __ne__(self, other: 'ExportOptions') -> bool:
        """
        Compare two ExportOptions objects.

        Args:
            other (ExportOptions): The other ExportOptions object to compare.

        Returns (bool):
            True if the objects are not equal, False otherwise.

        Examples:
            >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options1 != options2
            False

            >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
            >>> options1 != options3
            True
        """
        return not self.__eq__(other)

    @classmethod
    def default(cls):
        return cls(
            spine_types=deepcopy(HEADERS),
            token_categories=[c for c in TokenCategory],
            from_measure=None,
            to_measure=None,
            kern_type=Encoding.normalizedKern,
            instruments=None,
            show_measure_numbers=False,
            spine_ids=None
        )

__eq__(other)

Compare two ExportOptions objects.

Parameters:

Name Type Description Default
other 'ExportOptions'

The other ExportOptions object to compare.

required

Returns (bool): True if the objects are equal, False otherwise.

Examples:

>>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options1 == options2
True
>>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
>>> options1 == options3
False
Source code in kernpy/core/exporter.py
def __eq__(self, other: 'ExportOptions') -> bool:
    """
    Compare two ExportOptions objects.

    Args:
        other: The other ExportOptions object to compare.

    Returns (bool):
        True if the objects are equal, False otherwise.

    Examples:
        >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options1 == options2
        True

        >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
        >>> options1 == options3
        False
    """
    return self.spine_types == other.spine_types and \
        self.token_categories == other.token_categories and \
        self.from_measure == other.from_measure and \
        self.to_measure == other.to_measure and \
        self.kern_type == other.kern_type and \
        self.instruments == other.instruments and \
        self.show_measure_numbers == other.show_measure_numbers and \
        self.spine_ids == other.spine_ids

__init__(spine_types=None, token_categories=None, from_measure=None, to_measure=None, kern_type=Encoding.normalizedKern, instruments=None, show_measure_numbers=False, spine_ids=None)

Create a new ExportOptions object.

Parameters:

Name Type Description Default
spine_types Iterable

**kern, **mens, etc...

None
token_categories Iterable

TokenCategory

None
from_measure int

The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1

None
to_measure int

The measure to end exporting. When None, the exporter will end at the end of the file.

None
kern_type Encoding

The type of the kern file to export.

normalizedKern
instruments Iterable

The instruments to export. When None, all the instruments will be exported.

None
show_measure_numbers Bool

Show the measure numbers in the exported file.

False
spine_ids Iterable

The ids of the spines to export. When None, all the spines will be exported. Spines ids start from 0 and they are increased by 1.

None
Example:

>>> import kernpy

Create the importer and read the file
>>> hi = Importer()
>>> document = hi.import_file('file.krn')
>>> exporter = Exporter()

Export the file with the specified options
>>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> exported_data = exporter.export_string(document, options)

Export only the lyrics
>>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LYRICS])
>>> exported_data = exporter.export_string(document, options)

Export the comments
>>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LINE_COMMENTS, TokenCategory.FIELD_COMMENTS])
>>> exported_data = exporter.export_string(document, options)

Export using the eKern version
>>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES, kern_type=Encoding.eKern)
>>> exported_data = exporter.export_string(document, options)

Source code in kernpy/core/exporter.py
def __init__(
        self,
        spine_types: [] = None,
        token_categories: [] = None,
        from_measure: int = None,
        to_measure: int = None,
        kern_type: Encoding = Encoding.normalizedKern,
        instruments: [] = None,
        show_measure_numbers: bool = False,
        spine_ids: [int] = None
):
    """
    Create a new ExportOptions object.

    Args:
        spine_types (Iterable): **kern, **mens, etc...
        token_categories (Iterable): TokenCategory
        from_measure (int): The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1
        to_measure (int): The measure to end exporting. When None, the exporter will end at the end of the file.
        kern_type (Encoding): The type of the kern file to export.
        instruments (Iterable): The instruments to export. When None, all the instruments will be exported.
        show_measure_numbers (Bool): Show the measure numbers in the exported file.
        spine_ids (Iterable): The ids of the spines to export. When None, all the spines will be exported. Spines ids start from 0 and they are increased by 1.

    Example:
        >>> import kernpy

        Create the importer and read the file
        >>> hi = Importer()
        >>> document = hi.import_file('file.krn')
        >>> exporter = Exporter()

        Export the file with the specified options
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> exported_data = exporter.export_string(document, options)

        Export only the lyrics
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LYRICS])
        >>> exported_data = exporter.export_string(document, options)

        Export the comments
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LINE_COMMENTS, TokenCategory.FIELD_COMMENTS])
        >>> exported_data = exporter.export_string(document, options)

        Export using the eKern version
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES, kern_type=Encoding.eKern)
        >>> exported_data = exporter.export_string(document, options)

    """
    self.spine_types = spine_types if spine_types is not None else deepcopy(HEADERS)
    self.from_measure = from_measure
    self.to_measure = to_measure
    self.token_categories = token_categories if token_categories is not None else [c for c in TokenCategory]
    self.kern_type = kern_type
    self.instruments = instruments
    self.show_measure_numbers = show_measure_numbers
    self.spine_ids = spine_ids  # When exporting, if spine_ids=None all the spines will be exported.
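
The `from_measure`/`to_measure` bounds above follow the rules enforced by `Exporter.export_options_validator`. A stand-alone sketch of those checks (here `total_measures` is an assumed stand-in for `len(document.measure_start_tree_stages)`):

```python
def validate_measure_range(from_measure, to_measure, total_measures):
    """Raise ValueError when the requested measure range is out of bounds."""
    if from_measure is not None and from_measure < 0:
        raise ValueError(f'from_measure must be >= 0 but {from_measure} was found.')
    if to_measure is not None and to_measure > total_measures:
        raise ValueError(f'to_measure must be <= {total_measures} but {to_measure} was found.')
    if from_measure is not None and to_measure is not None and to_measure < from_measure:
        raise ValueError(f'to_measure must be >= from_measure but {to_measure} < {from_measure} was found.')

validate_measure_range(1, 8, total_measures=10)   # OK: export measures 1 through 8
```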

__ne__(other)

Compare two ExportOptions objects.

Parameters:

Name Type Description Default
other ExportOptions

The other ExportOptions object to compare.

required

Returns (bool): True if the objects are not equal, False otherwise.

Examples:

>>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options1 != options2
False
>>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
>>> options1 != options3
True
Source code in kernpy/core/exporter.py
def __ne__(self, other: 'ExportOptions') -> bool:
    """
    Compare two ExportOptions objects.

    Args:
        other (ExportOptions): The other ExportOptions object to compare.

    Returns (bool):
        True if the objects are not equal, False otherwise.

    Examples:
        >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options1 != options2
        False

        >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
        >>> options1 != options3
        True
    """
    return not self.__eq__(other)
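
`ExportOptions` implements value equality: two instances compare equal exactly when every configuration field matches. A minimal sketch of the same pattern (a stand-in class, not the real `ExportOptions`):

```python
class Options:
    """Toy options object illustrating field-by-field value equality."""

    def __init__(self, spine_types=None, from_measure=None):
        self.spine_types = spine_types if spine_types is not None else []
        self.from_measure = from_measure

    def __eq__(self, other):
        # Equal only when every configuration field matches.
        return (self.spine_types == other.spine_types
                and self.from_measure == other.from_measure)

    def __ne__(self, other):
        return not self.__eq__(other)

a = Options(spine_types=['**kern'])
b = Options(spine_types=['**kern'])
c = Options(spine_types=['**kern', '**harm'])
assert a == b and a != c
```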

Exporter

Source code in kernpy/core/exporter.py
class Exporter:
    def export_string(self, document: Document, options: ExportOptions) -> str:
        self.export_options_validator(document, options)

        rows = []

        if options.to_measure is not None and options.to_measure < len(document.measure_start_tree_stages):

            if options.to_measure < len(document.measure_start_tree_stages) - 1:
                to_stage = document.measure_start_tree_stages[
                    options.to_measure]  # take the barlines from the next coming measure
            else:
                to_stage = len(document.tree.stages) - 1  # all stages
        else:
            to_stage = len(document.tree.stages) - 1  # all stages

        if options.from_measure:
            # When not starting from the first measure, recover the spine creation tokens and the headers.
            # Traverse in reverse order to include only the spines active at the given measure...
            from_stage = document.measure_start_tree_stages[options.from_measure - 1]
            next_nodes = document.tree.stages[from_stage]
            while next_nodes and len(next_nodes) > 0 and next_nodes[0] != document.tree.root:
                row = []
                new_next_nodes = []
                non_place_holder_in_row = False
                spine_operation_row = False
                for node in next_nodes:
                    if isinstance(node.token, SpineOperationToken):
                        spine_operation_row = True
                        break

                for node in next_nodes:
                    content = ''
                    if isinstance(node.token, HeaderToken) and node.token.encoding in options.spine_types:
                        content = self.export_token(node.token, options)
                        non_place_holder_in_row = True
                    elif spine_operation_row:
                        # either if it is the split operator that has been cancelled, or the join one
                        if isinstance(node.token, SpineOperationToken) and (node.token.is_cancelled_at(
                                from_stage) or node.last_spine_operator_node and node.last_spine_operator_node.token.cancelled_at_stage == node.stage):
                            content = '*'
                        else:
                            content = self.export_token(node.token, options)
                            non_place_holder_in_row = True
                    if content:
                        row.append(content)
                    new_next_nodes.append(node.parent)
                next_nodes = new_next_nodes
                if non_place_holder_in_row:  # if the row contains just placeholders due to an omitted placeholder, don't add it
                    rows.insert(0, row)

            # now, export the signatures
            node_signatures = None
            for node in document.tree.stages[from_stage]:
                node_signature_rows = []
                for signature_node in node.last_signature_nodes.nodes.values():
                    if not self.is_signature_cancelled(signature_node, node, from_stage, to_stage):
                        node_signature_rows.append(self.export_token(signature_node.token, options))
                if len(node_signature_rows) > 0:
                    if not node_signatures:
                        node_signatures = []  # an array for each spine
                    else:
                        if len(node_signatures[0]) != len(node_signature_rows):
                            raise Exception(f'Node signature mismatch: multiple spines with signatures at measure {len(rows)}')  # TODO better message
                    node_signatures.append(node_signature_rows)

            if node_signatures:
                for irow in range(len(node_signatures[0])):  # all spines have the same number of rows
                    row = []
                    for icol in range(len(node_signatures)):  #len(node_signatures) = number of spines
                        row.append(node_signatures[icol][irow])
                    rows.append(row)

        else:
            from_stage = 0
            rows = []

        #if not node.token.category == TokenCategory.LINE_COMMENTS and not node.token.category == TokenCategory.FIELD_COMMENTS:
        for stage in range(from_stage, to_stage + 1):  # to_stage included
            row = []
            for node in document.tree.stages[stage]:
                self.append_row(document=document, node=node, options=options, row=row)

            if len(row) > 0:
                rows.append(row)

        # now, add the spine terminate row
        if options.to_measure is not None and len(rows) > 0 and rows[len(rows) - 1][
            0] != '*-':  # if the terminate is not added yet
            spine_count = len(rows[len(rows) - 1])
            row = []
            for i in range(spine_count):
                row.append('*-')
            rows.append(row)

        result = ""
        for row in rows:
            if not empty_row(row):
                result += '\t'.join(row) + '\n'
        return result

    def compute_header_type(self, node) -> Optional[HeaderToken]:
        """
        Compute the header type of the node.

        Args:
            node (Node): The node to compute.

        Returns (Optional[HeaderToken]): The header token of the node's spine, or None if no header is associated with the node.

        """
        if isinstance(node.token, HeaderToken):
            header_type = node.token
        elif node.header_node:
            header_type = node.header_node.token
        else:
            header_type = None
        return header_type

    def export_token(self, token: Token, options: ExportOptions) -> str:
        if isinstance(token, HeaderToken):
            new_token = HeaderTokenGenerator.new(token=token, type=options.kern_type)
        else:
            new_token = token
        return (TokenizerFactory
                .create(options.kern_type.value, token_categories=options.token_categories)
                .tokenize(new_token))

    def append_row(self, document: Document, node, options: ExportOptions, row: list) -> bool:
        """
        Append a row to the row list if the node accomplishes the requirements.
        Args:
            document (Document): The document with the spines.
            node (Node): The node to append.
            options (ExportOptions): The export options to filter the token.
            row (list): The row to append.

        Returns (bool): True if the row was appended. False if the row was not appended.
        """
        header_type = self.compute_header_type(node)

        if (header_type is not None
                and header_type.encoding in options.spine_types
                and not node.token.hidden
                and (isinstance(node.token, ComplexToken) or node.token.category in options.token_categories)
                and (options.spine_ids is None or header_type.spine_id in options.spine_ids)
        # If None, all the spines will be exported. TODO: put all the spines as spine_ids = None
        ):
            row.append(self.export_token(node.token, options))
            return True

        return False

    def get_spine_types(self, document: Document, spine_types: list = None):
        """
        Get the spine types from the document.

        Args:
            document (Document): The document with the spines.
            spine_types (list): The spine types to export. If None, all the spine types will be exported.

        Returns: A list with the spine types.

        Examples:
            >>> exporter = Exporter()
            >>> exporter.get_spine_types(document)
            ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
            >>> exporter.get_spine_types(document, None)
            ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
            >>> exporter.get_spine_types(document, ['**kern'])
            ['**kern', '**kern', '**kern', '**kern']
            >>> exporter.get_spine_types(document, ['**kern', '**root'])
            ['**kern', '**kern', '**kern', '**kern', '**root']
            >>> exporter.get_spine_types(document, ['**kern', '**root', '**harm'])
            ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
            >>> exporter.get_spine_types(document, [])
            []
        """
        if spine_types is not None and len(spine_types) == 0:
            return []

        options = ExportOptions(spine_types=spine_types, token_categories=[TokenCategory.HEADER])
        content = self.export_string(document, options)

        # Remove all after the first line: **kern, **mens, etc... are always in the first row
        lines = content.split('\n')
        first_line = lines[0:1]
        tokens = first_line[0].split('\t')

        return tokens if tokens not in [[], ['']] else []


    @classmethod
    def export_options_validator(cls, document: Document, options: ExportOptions) -> None:
        """
        Validate the export options. Raise an exception if the options are invalid.

        Args:
            document: `Document` - The document to export.
            options: `ExportOptions` - The options to export the document.

        Returns: None

        Example:
            >>> export_options_validator(document, options)
            ValueError: option from_measure must be >=0 but -1 was found.
            >>> export_options_validator(document, options2)
            None
        """
        if options.from_measure is not None and options.from_measure < 0:
            raise ValueError(f'option from_measure must be >=0 but {options.from_measure} was found. ')
        if options.to_measure is not None and options.to_measure > len(document.measure_start_tree_stages):
            # "TODO: DAVID, check options.to_measure bounds. len(document.measure_start_tree_stages) or len(document.measure_start_tree_stages) - 1"
            raise ValueError(
                f'option to_measure must be <= {len(document.measure_start_tree_stages)} but {options.to_measure} was found. ')
        if options.to_measure is not None and options.from_measure is not None and options.to_measure < options.from_measure:
            raise ValueError(
                f'option to_measure must be >= from_measure but {options.to_measure} < {options.from_measure} was found. ')

    def is_signature_cancelled(self, signature_node, node, from_stage, to_stage) -> bool:
        if node.token.__class__ == signature_node.token.__class__:
            return True
        elif isinstance(node.token, NoteRestToken):
            return False
        elif from_stage < to_stage:
            for child in node.children:
                if self.is_signature_cancelled(signature_node, child, from_stage + 1, to_stage):
                    return True
            return False

append_row(document, node, options, row)

Append a row to the row list if the node meets the requirements.

Parameters:

Name Type Description Default
document Document

The document with the spines.

required
node Node

The node to append.

required
options ExportOptions

The export options to filter the token.

required
row list

The row to append.

required

Returns (bool): True if the row was appended; False otherwise.

Source code in kernpy/core/exporter.py
def append_row(self, document: Document, node, options: ExportOptions, row: list) -> bool:
    """
    Append a row to the row list if the node accomplishes the requirements.
    Args:
        document (Document): The document with the spines.
        node (Node): The node to append.
        options (ExportOptions): The export options to filter the token.
        row (list): The row to append.

    Returns (bool): True if the row was appended. False if the row was not appended.
    """
    header_type = self.compute_header_type(node)

    if (header_type is not None
            and header_type.encoding in options.spine_types
            and not node.token.hidden
            and (isinstance(node.token, ComplexToken) or node.token.category in options.token_categories)
            and (options.spine_ids is None or header_type.spine_id in options.spine_ids)
    # If None, all the spines will be exported. TODO: put all the spines as spine_ids = None
    ):
        row.append(self.export_token(node.token, options))
        return True

    return False

compute_header_type(node)

Compute the header type of the node.

Parameters:

Name Type Description Default
node Node

The node to compute.

required

Returns (Optional[HeaderToken]): The `HeaderToken` of the node's spine. None if the node has no header.

Source code in kernpy/core/exporter.py
def compute_header_type(self, node) -> Optional[HeaderToken]:
    """
    Compute the header type of the node.

    Args:
        node (Node): The node to compute.

    Returns (Optional[HeaderToken]): The `HeaderToken` of the node's spine. None if the node has no header.

    """
    if isinstance(node.token, HeaderToken):
        header_type = node.token
    elif node.header_node:
        header_type = node.header_node.token
    else:
        header_type = None
    return header_type

export_options_validator(document, options) classmethod

Validate the export options. Raise an exception if the options are invalid.

Parameters:

Name Type Description Default
document Document

Document - The document to export.

required
options ExportOptions

ExportOptions - The options to export the document.

required

Returns: None

Example

>>> export_options_validator(document, options)
ValueError: option from_measure must be >=0 but -1 was found.
>>> export_options_validator(document, options2)
None

Source code in kernpy/core/exporter.py
@classmethod
def export_options_validator(cls, document: Document, options: ExportOptions) -> None:
    """
    Validate the export options. Raise an exception if the options are invalid.

    Args:
        document: `Document` - The document to export.
        options: `ExportOptions` - The options to export the document.

    Returns: None

    Example:
        >>> export_options_validator(document, options)
        ValueError: option from_measure must be >=0 but -1 was found.
        >>> export_options_validator(document, options2)
        None
    """
    if options.from_measure is not None and options.from_measure < 0:
        raise ValueError(f'option from_measure must be >=0 but {options.from_measure} was found. ')
    if options.to_measure is not None and options.to_measure > len(document.measure_start_tree_stages):
        # "TODO: DAVID, check options.to_measure bounds. len(document.measure_start_tree_stages) or len(document.measure_start_tree_stages) - 1"
        raise ValueError(
            f'option to_measure must be <= {len(document.measure_start_tree_stages)} but {options.to_measure} was found. ')
    if options.to_measure is not None and options.from_measure is not None and options.to_measure < options.from_measure:
        raise ValueError(
            f'option to_measure must be >= from_measure but {options.to_measure} < {options.from_measure} was found. ')

get_spine_types(document, spine_types=None)

Get the spine types from the document.

Parameters:

Name Type Description Default
document Document

The document with the spines.

required
spine_types list

The spine types to export. If None, all the spine types will be exported.

None

Returns: A list with the spine types.

Examples:

>>> exporter = Exporter()
>>> exporter.get_spine_types(document)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> exporter.get_spine_types(document, None)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> exporter.get_spine_types(document, ['**kern'])
['**kern', '**kern', '**kern', '**kern']
>>> exporter.get_spine_types(document, ['**kern', '**root'])
['**kern', '**kern', '**kern', '**kern', '**root']
>>> exporter.get_spine_types(document, ['**kern', '**root', '**harm'])
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> exporter.get_spine_types(document, [])
[]
Source code in kernpy/core/exporter.py
def get_spine_types(self, document: Document, spine_types: list = None):
    """
    Get the spine types from the document.

    Args:
        document (Document): The document with the spines.
        spine_types (list): The spine types to export. If None, all the spine types will be exported.

    Returns: A list with the spine types.

    Examples:
        >>> exporter = Exporter()
        >>> exporter.get_spine_types(document)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> exporter.get_spine_types(document, None)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> exporter.get_spine_types(document, ['**kern'])
        ['**kern', '**kern', '**kern', '**kern']
        >>> exporter.get_spine_types(document, ['**kern', '**root'])
        ['**kern', '**kern', '**kern', '**kern', '**root']
        >>> exporter.get_spine_types(document, ['**kern', '**root', '**harm'])
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> exporter.get_spine_types(document, [])
        []
    """
    if spine_types is not None and len(spine_types) == 0:
        return []

    options = ExportOptions(spine_types=spine_types, token_categories=[TokenCategory.HEADER])
    content = self.export_string(document, options)

    # Remove all after the first line: **kern, **mens, etc... are always in the first row
    lines = content.split('\n')
    first_line = lines[0:1]
    tokens = first_line[0].split('\t')

    return tokens if tokens not in [[], ['']] else []

F3Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class F3Clef(Clef):
    def __init__(self):
        """
        Initializes the F Clef object.
        """
        super().__init__(DiatonicPitch('F'), 3)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('B', 3)

__init__()

Initializes the F Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the F Clef object.
    """
    super().__init__(DiatonicPitch('F'), 3)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('B', 3)

F4Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class F4Clef(Clef):
    def __init__(self):
        """
        Initializes the F Clef object.
        """
        super().__init__(DiatonicPitch('F'), 4)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('G', 2)

__init__()

Initializes the F Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the F Clef object.
    """
    super().__init__(DiatonicPitch('F'), 4)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('G', 2)

FingSpineImporter

Bases: SpineImporter

Source code in kernpy/core/fing_spine_importer.py
class FingSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        FingSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()


    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.FINGERING)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.FINGERING)

        return token
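The `import_token` flow above delegates to the stricter `**kern` importer and falls back to a fingering `SimpleToken` when parsing fails or the resulting category is not one of the accepted structural categories. A minimal standalone sketch of that delegate-then-fallback pattern (all names below are illustrative, not kernpy API):

```python
# Illustrative sketch of the delegate-then-fallback pattern used by
# FingSpineImporter.import_token: try the stricter parser first, and keep
# the raw encoding under a default category on failure or unexpected category.
def parse_with_fallback(encoding, parser, accepted, fallback_category):
    try:
        token, category = parser(encoding)
    except ValueError:
        return encoding, fallback_category  # parsing failed: keep raw text
    if category not in accepted:
        return encoding, fallback_category  # parsed, but not structural
    return token, category

def toy_kern_parser(s):
    # Recognizes only barlines; everything else is rejected.
    if s.startswith('='):
        return s, 'BARLINES'
    raise ValueError(f'not a structural token: {s}')

print(parse_with_fallback('=1', toy_kern_parser, {'BARLINES'}, 'FINGERING'))
print(parse_with_fallback('3', toy_kern_parser, {'BARLINES'}, 'FINGERING'))
```

The same shape explains `HarmSpineImporter` below, with `HARMONY` in place of `FINGERING`.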

__init__(verbose=False)

FingSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/fing_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    FingSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

GClef

Bases: Clef

Source code in kernpy/core/gkern.py
class GClef(Clef):
    def __init__(self):
        """
        Initializes the G Clef object.
        """
        super().__init__(DiatonicPitch('G'), 2)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('E', 4)

__init__()

Initializes the G Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the G Clef object.
    """
    super().__init__(DiatonicPitch('G'), 2)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('E', 4)

GraphvizExporter

Source code in kernpy/core/graphviz_exporter.py
class GraphvizExporter:
    def export_token(self, token: Token):
        if token is None or token.encoding is None:
            return ''
        else:
            # Escape backslashes before quotes so previously added escapes are not doubled
            return token.encoding.replace('\\', '\\\\').replace('"', '\\"')

    @staticmethod
    def node_id(node: Node):
        return f"node{id(node)}"

    def export_to_dot(self, tree: MultistageTree, filename: Path = None):
        """
        Export the given MultistageTree to DOT format.

        Args:
            tree (MultistageTree): The tree to export.
            filename (Path or None): The output file path. If None, prints to stdout.
        """
        file = sys.stdout if filename is None else open(filename, 'w')

        try:
            file.write('digraph G {\n')
            file.write('    node [shape=record];\n')
            file.write('    rankdir=TB;\n')  # Ensure top-to-bottom layout

            # Create subgraphs for each stage
            for stage_index, stage in enumerate(tree.stages):
                if stage:
                    file.write('  {rank=same; ')
                    for node in stage:
                        file.write(f'"{self.node_id(node)}"; ')
                    file.write('}\n')

            # Write nodes and their connections
            self._write_nodes_iterative(tree.root, file)
            self._write_edges_iterative(tree.root, file)

            file.write('}\n')

        finally:
            if filename is not None:
                file.close()  # Close only if we explicitly opened a file

    def _write_nodes_iterative(self, root, file):
        stack = [root]

        while stack:
            node = stack.pop()
            header_label = f'header #{node.header_node.id}' if node.header_node else ''
            last_spine_operator_label = f'last spine op. #{node.last_spine_operator_node.id}' if node.last_spine_operator_node else ''
            category_name = getattr(getattr(getattr(node, "token", None), "category", None), "_name_", "Non defined category")


            top_record_label = f'{{ #{node.id}| stage {node.stage} | {header_label} | {last_spine_operator_label} | {category_name} }}'
            signatures_label = ''
            if node.last_signature_nodes and node.last_signature_nodes.nodes:
                for k, v in node.last_signature_nodes.nodes.items():
                    if signatures_label:
                        signatures_label += '|'
                    signatures_label += f'{k} #{v.id}'

            if isinstance(node.token, SpineOperationToken) and node.token.cancelled_at_stage:
                signatures_label += f'| {{ cancelled at stage {node.token.cancelled_at_stage} }}'

            file.write(f'  "{self.node_id(node)}" [label="{{ {top_record_label} | {signatures_label} | {self.export_token(node.token)} }}"];\n')

            # Add children to the stack to be processed
            for child in reversed(node.children):
                stack.append(child)

    def _write_edges_iterative(self, root, file):
        stack = [root]

        while stack:
            node = stack.pop()
            for child in node.children:
                file.write(f'  "{self.node_id(node)}" -> "{self.node_id(child)}";\n')
                stack.append(child)

export_to_dot(tree, filename=None)

Export the given MultistageTree to DOT format.

Parameters:

Name Type Description Default
tree MultistageTree

The tree to export.

required
filename Path or None

The output file path. If None, prints to stdout.

None
Source code in kernpy/core/graphviz_exporter.py
def export_to_dot(self, tree: MultistageTree, filename: Path = None):
    """
    Export the given MultistageTree to DOT format.

    Args:
        tree (MultistageTree): The tree to export.
        filename (Path or None): The output file path. If None, prints to stdout.
    """
    file = sys.stdout if filename is None else open(filename, 'w')

    try:
        file.write('digraph G {\n')
        file.write('    node [shape=record];\n')
        file.write('    rankdir=TB;\n')  # Ensure top-to-bottom layout

        # Create subgraphs for each stage
        for stage_index, stage in enumerate(tree.stages):
            if stage:
                file.write('  {rank=same; ')
                for node in stage:
                    file.write(f'"{self.node_id(node)}"; ')
                file.write('}\n')

        # Write nodes and their connections
        self._write_nodes_iterative(tree.root, file)
        self._write_edges_iterative(tree.root, file)

        file.write('}\n')

    finally:
        if filename is not None:
            file.close()  # Close only if we explicitly opened a file

HarmSpineImporter

Bases: SpineImporter

Source code in kernpy/core/harm_spine_importer.py
class HarmSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        HarmSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.HARMONY)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.HARMONY)

        return token

__init__(verbose=False)

HarmSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/harm_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    HarmSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

HeaderToken

Bases: SimpleToken

HeaderTokens class.

Source code in kernpy/core/tokens.py
class HeaderToken(SimpleToken):
    """
    HeaderTokens class.
    """

    def __init__(self, encoding, spine_id: int):
        """
        Constructor for the HeaderToken class.

        Args:
            encoding (str): The original representation of the token.
            spine_id (int): The spine id of the token. The spine id is used to identify the token in the score.\
                The spine_id starts from 0 and increases by 1 for each new spine like the following example:
                **kern  **kern  **kern **dyn **text
                0   1   2   3   4
        """
        super().__init__(encoding, TokenCategory.HEADER)
        self.spine_id = spine_id

    def export(self, **kwargs) -> str:
        return self.encoding

__init__(encoding, spine_id)

Constructor for the HeaderToken class.

Parameters:

Name Type Description Default
encoding str

The original representation of the token.

required
spine_id int

The spine id of the token. The spine id is used to identify the token in the score. The spine_id starts from 0 and increases by 1 for each new spine, as in the following example:

**kern  **kern  **kern  **dyn  **text
0   1   2   3   4

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding, spine_id: int):
    """
    Constructor for the HeaderToken class.

    Args:
        encoding (str): The original representation of the token.
        spine_id (int): The spine id of the token. The spine id is used to identify the token in the score.\
            The spine_id starts from 0 and increases by 1 for each new spine like the following example:
            **kern  **kern  **kern **dyn **text
            0   1   2   3   4
    """
    super().__init__(encoding, TokenCategory.HEADER)
    self.spine_id = spine_id

HeaderTokenGenerator

HeaderTokenGenerator class.

This class is used to translate the HeaderTokens to the specific encoding format.

Source code in kernpy/core/exporter.py
class HeaderTokenGenerator:
    """
    HeaderTokenGenerator class.

    This class is used to translate the HeaderTokens to the specific encoding format.
    """
    @classmethod
    def new(cls, *, token: HeaderToken, type: Encoding):
        """
        Create a new HeaderTokenGenerator object. Only accepts standardized Humdrum **kern encodings.

        Args:
            token (HeaderToken): The HeaderToken to be translated.
            type (Encoding): The encoding to be used.

        Examples:
            >>> header = HeaderToken('**kern', 0)
            >>> header.encoding
            '**kern'
            >>> new_header = HeaderTokenGenerator.new(token=header, type=Encoding.eKern)
            >>> new_header.encoding
            '**ekern'
        """
        new_encoding = f'**{type.prefix()}{token.encoding[2:]}'
        new_token = HeaderToken(new_encoding, token.spine_id)

        return new_token

new(*, token, type) classmethod

Create a new HeaderTokenGenerator object. Only accepts standardized Humdrum **kern encodings.

Parameters:

Name Type Description Default
token HeaderToken

The HeaderToken to be translated.

required
type Encoding

The encoding to be used.

required

Examples:

>>> header = HeaderToken('**kern', 0)
>>> header.encoding
'**kern'
>>> new_header = HeaderTokenGenerator.new(token=header, type=Encoding.eKern)
>>> new_header.encoding
'**ekern'
Source code in kernpy/core/exporter.py
@classmethod
def new(cls, *, token: HeaderToken, type: Encoding):
    """
    Create a new HeaderTokenGenerator object. Only accepts standardized Humdrum **kern encodings.

    Args:
        token (HeaderToken): The HeaderToken to be translated.
        type (Encoding): The encoding to be used.

    Examples:
        >>> header = HeaderToken('**kern', 0)
        >>> header.encoding
        '**kern'
        >>> new_header = HeaderTokenGenerator.new(token=header, type=Encoding.eKern)
        >>> new_header.encoding
        '**ekern'
    """
    new_encoding = f'**{type.prefix()}{token.encoding[2:]}'
    new_token = HeaderToken(new_encoding, token.spine_id)

    return new_token

HumdrumPitchImporter

Bases: PitchImporter

Represents the pitch in the Humdrum Kern format.

The pitch name is represented using International Organization for Standardization (ISO) notation. In G clef, the first ledger line below the staff is C4; the C above it is C5, the C below it is C3, and so on.

The Humdrum Kern format uses the following name representation:

'c' = C4
'cc' = C5
'ccc' = C6
'cccc' = C7

'C' = C3
'CC' = C2
'CCC' = C1

This class does not limit the name ranges.

In the following example, the pitch is represented by the letter 'c': 'c' is C4, 'cc' is C5, and 'ccc' is C6.

**kern
*clefG2
2c          // C4
2cc         // C5
2ccc        // C6
2C          // C3
2CC         // C2
2CCC        // C1
*-
Source code in kernpy/core/pitch_models.py
class HumdrumPitchImporter(PitchImporter):
    """
    Represents the pitch in the Humdrum Kern format.

    The name is represented using the International Standard Organization (ISO) name notation.
    The first line below the staff is the C4 in G clef. The above C is C5, the below C is C3, etc.

    The Humdrum Kern format uses the following name representation:
    'c' = C4
    'cc' = C5
    'ccc' = C6
    'cccc' = C7

    'C' = C3
    'CC' = C2
    'CCC' = C1

    This class does not limit the name ranges.

    In the following example, the name is represented by the letter 'c'. The name of 'c' is C4, 'cc' is C5, 'ccc' is C6.
    ```
    **kern
    *clefG2
    2c          // C4
    2cc         // C5
    2ccc        // C6
    2C          // C3
    2CC         // C2
    2CCC        // C1
    *-
    ```
    """
    C4_PITCH_LOWERCASE = 'c'
    C4_OCATAVE = 4
    C3_PITCH_UPPERCASE = 'C'
    C3_OCATAVE = 3
    VALID_PITCHES = 'abcdefg' + 'ABCDEFG'

    def __init__(self):
        super().__init__()

    def import_pitch(self, encoding: str) -> AgnosticPitch:
        self.name, self.octave = self._parse_pitch(encoding)
        return AgnosticPitch(self.name, self.octave)

    def _parse_pitch(self, encoding: str) -> tuple:
        accidentals = ''.join([c for c in encoding if c in ['#', '-']])
        accidentals = accidentals.replace('#', '+')
        encoding = encoding.replace('#', '').replace('-', '')
        pitch = encoding[0].lower()
        octave = None
        if encoding[0].islower():
            min_octave = HumdrumPitchImporter.C4_OCATAVE
            octave = min_octave + (len(encoding) - 1)
        elif encoding[0].isupper():
            max_octave = HumdrumPitchImporter.C3_OCATAVE
            octave = max_octave - (len(encoding) - 1)
        name = f"{pitch}{accidentals}"
        return name, octave
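The octave arithmetic in `_parse_pitch` can be summarized on its own: the letter case picks the reference octave (C4 for lowercase, C3 for uppercase) and each repeated letter moves one octave further away. A minimal sketch (the helper name is hypothetical, not kernpy API):

```python
# Hypothetical helper mirroring the octave arithmetic in
# HumdrumPitchImporter._parse_pitch: lowercase counts up from octave 4,
# uppercase counts down from octave 3.
def humdrum_octave(encoding: str) -> int:
    letter, repeats = encoding[0], len(encoding) - 1
    if letter.islower():
        return 4 + repeats   # 'c' -> 4, 'cc' -> 5, 'ccc' -> 6
    return 3 - repeats       # 'C' -> 3, 'CC' -> 2, 'CCC' -> 1

for token in ('c', 'cc', 'ccc', 'C', 'CC', 'CCC'):
    print(token, humdrum_octave(token))
```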

Importer

Importer class.

Use this class to import the content from a file or a string to a Document object.

Source code in kernpy/core/importer.py
class Importer:
    """
    Importer class.

    Use this class to import the content from a file or a string to a `Document` object.
    """
    def __init__(self):
        """
        Create an instance of the importer.

        Raises:
            Exception: If the importer content is not a valid **kern file.

        Examples:
            # Create the importer
            >>> importer = Importer()

            # Import the content from a file
            >>> document = importer.import_file('file.krn')

            # Import the content from a string
            >>> document = importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
        """
        self.last_measure_number = None
        self.last_bounding_box = None
        self.errors = []

        self._tree = MultistageTree()
        self._document = Document(self._tree)
        self._importers = {}
        self._header_row_number = None
        self._row_number = 1
        self._tree_stage = 0
        self._next_stage_parents = None
        self._prev_stage_parents = None
        self._last_node_previous_to_header = self._tree.root

    @staticmethod
    def get_last_spine_operator(parent):
        if parent is None:
            return None
        elif isinstance(parent.token, SpineOperationToken):
            return parent
        else:
            return parent.last_spine_operator_node

    #TODO Documentar cómo propagamos los header_node y last_spine_operator_node...
    def run(self, reader) -> Document:
        for row in reader:
            if len(row) <= 0:
                # Found an empty row, usually the last one. Ignore it.
                continue

            self._tree_stage = self._tree_stage + 1
            is_barline = False
            if self._next_stage_parents:
                self._prev_stage_parents = copy(self._next_stage_parents)
            self._next_stage_parents = []

            if row[0].startswith("!!"):
                self._compute_metacomment_token(row[0].strip())
            else:
                for icolumn, column in enumerate(row):
                    if column.startswith("**"):
                        self._compute_header_token(icolumn, column)
                        # go to next row
                        continue

                    if column in SPINE_OPERATIONS:
                        self._compute_spine_operator_token(icolumn, column, row)
                    else:  # column is not a spine operation
                        if column.startswith("!"):
                            token = FieldCommentToken(column)
                        else:
                            if self._prev_stage_parents is None:
                                raise ValueError(f'No spine header found for column #{icolumn}. '
                                                 f'Expected a previous line with valid content. '
                                                 f'The token in column #{icolumn} and row #{self._row_number - 1}'
                                                 f' was not created correctly. Error detected in '
                                                 f'column #{icolumn} in row #{self._row_number}. '
                                                 f'Found {column}. ')
                            if icolumn >= len(self._prev_stage_parents):
                                # TODO: Try to fix the kern in runtime. Add options to public API
                                # continue  # ignore the column
                                raise ValueError(f'Wrong columns number in row {self._row_number}. '
                                                 f'The token in column #{icolumn} and row #{self._row_number}'
                                                 f' has more columns than expected in its row. '
                                                 f'Expected {len(self._prev_stage_parents)} columns '
                                                 f'but found {len(row)}.')
                            parent = self._prev_stage_parents[icolumn]
                            if not parent:
                                raise Exception(f'Cannot find a parent node for column #{icolumn} in row {self._row_number}')
                            if not parent.header_node:
                                raise Exception(f'Cannot find a header node for column #{icolumn} in row {self._row_number}')
                            importer = self._importers.get(parent.header_node.token.encoding)
                            if not importer:
                                raise Exception(f'Cannot find an importer for header {parent.header_node.token.encoding}')
                            try:
                                token = importer.import_token(column)
                            except Exception as error:
                                token = ErrorToken(column, self._row_number, str(error))
                                self.errors.append(token)
                        if not token:
                            raise Exception(
                                f'No token generated for input {column} in row number #{self._row_number} using importer {importer}')

                        parent = self._prev_stage_parents[icolumn]
                        node = self._tree.add_node(self._tree_stage, parent, token, self.get_last_spine_operator(parent), parent.last_signature_nodes, parent.header_node)
                        self._next_stage_parents.append(node)

                        if (token.category == TokenCategory.BARLINES
                                or (TokenCategory.is_child(child=token.category, parent=TokenCategory.CORE)
                                    and len(self._document.measure_start_tree_stages) == 0)):
                            is_barline = True
                        elif isinstance(token, BoundingBoxToken):
                            self.handle_bounding_box(self._document, token)
                        elif isinstance(token, SignatureToken):
                            node.last_signature_nodes.update(node)

                if is_barline:
                    self._document.measure_start_tree_stages.append(self._tree_stage)
                    self.last_measure_number = len(self._document.measure_start_tree_stages)
                    if self.last_bounding_box:
                        self.last_bounding_box.to_measure = self.last_measure_number
            self._row_number = self._row_number + 1
        return self._document

    def handle_bounding_box(self, document: Document, token: BoundingBoxToken):
        page_number = token.page_number
        last_page_bb = document.page_bounding_boxes.get(page_number)
        if last_page_bb is None:
            if self.last_measure_number is None:
                self.last_measure_number = 0
            self.last_bounding_box = BoundingBoxMeasures(token.bounding_box, self.last_measure_number,
                                                         self.last_measure_number)
            document.page_bounding_boxes[page_number] = self.last_bounding_box
        else:
            last_page_bb.bounding_box.extend(token.bounding_box)
            last_page_bb.to_measure = self.last_measure_number

    def import_file(self, file_path: Path) -> Document:
        """
        Import the content of a **kern file and build a Document.
        Args:
            file_path: The path to the file.

        Returns:
            Document - The document with the imported content.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_file('file.krn')
        """
        with open(file_path, 'r', newline='', encoding='utf-8', errors='ignore') as file:
            reader = csv.reader(file, delimiter='\t')
            return self.run(reader)

    def import_string(self, text: str) -> Document:
        """
        Import a score from its string content and build a Document.

        Args:
            text: The content of the score in string format.

        Returns:
            Document - The document with the imported content.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
            # Read the content from a file
            >>> with open('file.krn',  'r', newline='', encoding='utf-8', errors='ignore') as f: # We encourage you to use these open file options
            >>>     content = f.read()
            >>> document = importer.import_string(content)
        """
        lines = text.splitlines()
        reader = csv.reader(lines, delimiter='\t')
        return self.run(reader)

    def get_error_messages(self) -> str:
        """
        Get the error messages of the importer.

        Returns: str - The error messages separated by newline characters.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_file(Path('file.krn'))
            >>> print(importer.get_error_messages())
            'Error: Invalid token in row 1'
        """
        result = ''
        for err in self.errors:
            result += str(err)
            result += '\n'
        return result

    def has_errors(self) -> bool:
        """
        Check if the importer has any errors.

        Returns: bool - True if the importer has errors, False otherwise.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_file(Path('file.krn'))    # file.krn has an error
            >>> print(importer.has_errors())
            True
            >>> importer.import_file(Path('file2.krn'))   # file2.krn has no errors
            >>> print(importer.has_errors())
            False
        """
        return len(self.errors) > 0

    def _compute_metacomment_token(self, raw_token: str):
        token = MetacommentToken(raw_token)
        if self._header_row_number is None:
            node = self._tree.add_node(self._tree_stage, self._last_node_previous_to_header, token, None, None, None)
            self._last_node_previous_to_header = node
        else:
            for parent in self._prev_stage_parents:
                node = self._tree.add_node(self._tree_stage, parent, token, self.get_last_spine_operator(parent), parent.last_signature_nodes, parent.header_node)  # the same token reference is shared across all spines - TODO: remember to document this
                self._next_stage_parents.append(node)

    def _compute_header_token(self, column_index: int, column_content: str):
        if self._header_row_number is not None and self._header_row_number != self._row_number:
            raise Exception(
                f"Multiple header rows are not supported: there is a header row at #{self._header_row_number} and another at #{self._row_number}")

        # it's a spine header
        self._document.header_stage = self._tree_stage
        importer = self._importers.get(column_content)
        if not importer:
            importer = createImporter(column_content)
            self._importers[column_content] = importer

        token = HeaderToken(column_content, spine_id=column_index)
        node = self._tree.add_node(self._tree_stage, self._last_node_previous_to_header, token, None, None, None)
        node.header_node = node # this value will be propagated
        self._next_stage_parents.append(node)

    def _compute_spine_operator_token(self, column_index: int, column_content: str, row: List[str]):
        token = SpineOperationToken(column_content)

        if column_index >= len(self._prev_stage_parents):
            raise Exception(f'Expected at least {column_index+1} parents in row {self._row_number}, but found {len(self._prev_stage_parents)}: {row}')

        parent = self._prev_stage_parents[column_index]
        node = self._tree.add_node(self._tree_stage, parent, token, self.get_last_spine_operator(parent), parent.last_signature_nodes, parent.header_node)

        if column_content == '*-':
            if node.last_spine_operator_node is not None:
                node.last_spine_operator_node.token.cancelled_at_stage = self._tree_stage
            pass # it's terminated, no continuation
        elif column_content == "*+" or column_content == "*^":
            self._next_stage_parents.append(node)
            self._next_stage_parents.append(node) # twice, the next stage two children will have this one as parent
        elif column_content == "*v":
            if node.last_spine_operator_node is not None:
                node.last_spine_operator_node.token.cancelled_at_stage = self._tree_stage

            if column_index == 0 or row[column_index-1] != '*v' or self._prev_stage_parents[column_index-1].header_node != self._prev_stage_parents[column_index].header_node: # don't collapse two different spines
                self._next_stage_parents.append(node) # just one spine each two
        else:
            raise Exception(f'Unknown spine operation {column_content!r} in row #{self._row_number}')
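
The spine operators handled above determine how many parent slots the next row receives: `*-` terminates a spine, `*^` (and `*+`) yields two children, and a run of `*v` collapses into one. A minimal standalone sketch of that bookkeeping (`next_column_count` is a hypothetical helper, not part of the kernpy API; the real importer additionally checks that merged `*v` columns belong to the same spine):

```python
def next_column_count(operators):
    """Given one row of spine-operator tokens, return the column count of the next row."""
    count = 0
    i = 0
    while i < len(operators):
        op = operators[i]
        if op == '*-':
            i += 1                      # terminated spine contributes nothing
        elif op in ('*^', '*+'):
            count += 2                  # split: two children in the next row
            i += 1
        elif op == '*v':
            while i < len(operators) and operators[i] == '*v':
                i += 1                  # a run of '*v' merges into one spine
            count += 1
        else:
            count += 1                  # '*' or content: the spine continues
            i += 1
    return count

next_column_count(['*', '*^', '*'])   # 4
next_column_count(['*v', '*v', '*'])  # 2
```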

__init__()

    Create an instance of the importer.

    Raises:
        Exception: If the importer content is not a valid **kern file.

    Examples:
        # Create the importer
        >>> importer = Importer()

        # Import the content from a file
        >>> document = importer.import_file('file.krn')

        # Import the content from a string
        >>> document = importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")

Source code in kernpy/core/importer.py
def __init__(self):
    """
    Create an instance of the importer.

    Raises:
        Exception: If the importer content is not a valid **kern file.

    Examples:
        # Create the importer
        >>> importer = Importer()

        # Import the content from a file
        >>> document = importer.import_file('file.krn')

        # Import the content from a string
        >>> document = importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
    """
    self.last_measure_number = None
    self.last_bounding_box = None
    self.errors = []

    self._tree = MultistageTree()
    self._document = Document(self._tree)
    self._importers = {}
    self._header_row_number = None
    self._row_number = 1
    self._tree_stage = 0
    self._next_stage_parents = None
    self._prev_stage_parents = None
    self._last_node_previous_to_header = self._tree.root

get_error_messages()

Get the error messages of the importer.

Returns: str - The error messages separated by newline characters.

Examples:

Create the importer and read the file

>>> importer = Importer()
>>> importer.import_file(Path('file.krn'))
>>> print(importer.get_error_messages())
'Error: Invalid token in row 1'
Source code in kernpy/core/importer.py
def get_error_messages(self) -> str:
    """
    Get the error messages of the importer.

    Returns: str - The error messages separated by newline characters.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_file(Path('file.krn'))
        >>> print(importer.get_error_messages())
        'Error: Invalid token in row 1'
    """
    result = ''
    for err in self.errors:
        result += str(err)
        result += '\n'
    return result

has_errors()

Check if the importer has any errors.

Returns: bool - True if the importer has errors, False otherwise.

Examples:

Create the importer and read the file

>>> importer = Importer()
>>> importer.import_file(Path('file.krn'))    # file.krn has an error
>>> print(importer.has_errors())
True
>>> importer.import_file(Path('file2.krn'))   # file2.krn has no errors
>>> print(importer.has_errors())
False
Source code in kernpy/core/importer.py
def has_errors(self) -> bool:
    """
    Check if the importer has any errors.

    Returns: bool - True if the importer has errors, False otherwise.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_file(Path('file.krn'))    # file.krn has an error
        >>> print(importer.has_errors())
        True
        >>> importer.import_file(Path('file2.krn'))   # file2.krn has no errors
        >>> print(importer.has_errors())
        False
    """
    return len(self.errors) > 0

import_file(file_path)

Import the content of a **kern file and build a Document. Args: file_path: The path to the file.

Returns:

Type Description
Document

Document - The document with the imported content.

Examples:

Create the importer and read the file

>>> importer = Importer()
>>> importer.import_file('file.krn')
Source code in kernpy/core/importer.py
def import_file(self, file_path: Path) -> Document:
    """
    Import the content of a **kern file and build a Document.
    Args:
        file_path: The path to the file.

    Returns:
        Document - The document with the imported content.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_file('file.krn')
    """
    with open(file_path, 'r', newline='', encoding='utf-8', errors='ignore') as file:
        reader = csv.reader(file, delimiter='\t')
        return self.run(reader)

import_string(text)

    Import a score from its string content and build a Document.

    Args:
        text: The content of the score in string format.

    Returns:
        Document - The document with the imported content.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
        # Read the content from a file
        >>> with open('file.krn', 'r', newline='', encoding='utf-8', errors='ignore') as f:  # We encourage you to use these open file options
        >>>     content = f.read()
        >>> document = importer.import_string(content)

Source code in kernpy/core/importer.py
def import_string(self, text: str) -> Document:
    """
    Import a score from its string content and build a Document.

    Args:
        text: The content of the score in string format.

    Returns:
        Document - The document with the imported content.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
        # Read the content from a file
        >>> with open('file.krn',  'r', newline='', encoding='utf-8', errors='ignore') as f: # We encourage you to use these open file options
        >>>     content = f.read()
        >>> document = importer.import_string(content)
    """
    lines = text.splitlines()
    reader = csv.reader(lines, delimiter='\t')
    return self.run(reader)
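
Both `import_file` and `import_string` funnel the score through a tab-delimited `csv.reader`, so each physical line becomes one row and tab-separated spines become separate columns. A quick sketch of the rows that `run()` receives:

```python
import csv

# A minimal two-spine **kern score as a string.
kern = "**kern\t**kern\n*clefF4\t*clefG2\n4c\t4e\n*-\t*-"

# Mirror the importer's setup: split into lines, read tab-separated columns.
rows = list(csv.reader(kern.splitlines(), delimiter='\t'))

rows[0]  # ['**kern', '**kern']
rows[2]  # ['4c', '4e']
```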

KernSpineImporter

Bases: SpineImporter

Source code in kernpy/core/kern_spine_importer.py
class KernSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        KernSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str):
        self._raise_error_if_wrong_input(encoding)

        # self.listenerImporter = KernListenerImporter(token)  # TODO: why doesn't this work?
        # self.listenerImporter.start()
        lexer = kernSpineLexer(InputStream(encoding))
        lexer.removeErrorListeners()
        lexer.addErrorListener(self.error_listener)
        stream = CommonTokenStream(lexer)
        parser = kernSpineParser(stream)
        parser._interp.predictionMode = PredictionMode.SLL  # SLL prediction mode speeds up parsing considerably
        parser.removeErrorListeners()
        parser.addErrorListener(self.error_listener)
        parser.errHandler = BailErrorStrategy()
        tree = parser.start()
        walker = ParseTreeWalker()
        listener = KernSpineListener()
        walker.walk(listener, tree)
        if self.error_listener.getNumberErrorsFound() > 0:
            raise Exception(self.error_listener.errors)
        return listener.token

__init__(verbose=False)

KernSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/kern_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    KernSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

KernTokenizer

Bases: Tokenizer

KernTokenizer converts a Token into a normalized kern string representation.

Source code in kernpy/core/tokenizers.py
class KernTokenizer(Tokenizer):
    """
    KernTokenizer converts a Token into a normalized kern string representation.
    """
    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new KernTokenizer.

        Args:
            token_categories (Set[TokenCategory]): Set of categories to be tokenized. Passing None raises an exception.
        """
        super().__init__(token_categories=token_categories)

    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a normalized kern string representation.
        This format is the classic Humdrum **kern representation.

        Args:
            token (Token): Token to be tokenized.

        Returns (str): Normalized kern string representation. This is the classic Humdrum **kern representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> KernTokenizer().tokenize(token)
            '2.bb-_L'
        """
        return EkernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '').replace(DECORATION_SEPARATOR, '')

__init__(*, token_categories)

Create a new KernTokenizer.

Parameters:

Name Type Description Default
token_categories Set[TokenCategory]

Set of categories to be tokenized. Passing None raises an exception.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new KernTokenizer.

    Args:
        token_categories (Set[TokenCategory]): Set of categories to be tokenized. Passing None raises an exception.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into a normalized kern string representation. This format is the classic Humdrum **kern representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): Normalized kern string representation. This is the classic Humdrum **kern representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> KernTokenizer().tokenize(token)
'2.bb-_L'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a normalized kern string representation.
    This format is the classic Humdrum **kern representation.

    Args:
        token (Token): Token to be tokenized.

    Returns (str): Normalized kern string representation. This is the classic Humdrum **kern representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> KernTokenizer().tokenize(token)
        '2.bb-_L'
    """
    return EkernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '').replace(DECORATION_SEPARATOR, '')
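
`KernTokenizer` delegates to `EkernTokenizer` and then strips the separators. The separator values below (`'@'` and `'·'`) are inferred from the doctest above, so treat this as a sketch rather than the library's constants:

```python
TOKEN_SEPARATOR = '@'        # assumed value, inferred from the doctest
DECORATION_SEPARATOR = '·'   # assumed value, inferred from the doctest

def ekern_to_kern(ekern: str) -> str:
    """Strip the ekern separators to recover the classic **kern encoding."""
    return ekern.replace(TOKEN_SEPARATOR, '').replace(DECORATION_SEPARATOR, '')

ekern_to_kern('2@.@bb@-·_·L')  # '2.bb-_L'
```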

MensSpineImporter

Bases: SpineImporter

Source code in kernpy/core/mens_spine_importer.py
class MensSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        MensSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        raise NotImplementedError()

    def import_token(self, encoding: str) -> Token:
        raise NotImplementedError()

__init__(verbose=False)

MensSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/mens_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    MensSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

MxhmSpineImporter

Bases: SpineImporter

Source code in kernpy/core/mhxm_spine_importer.py
class MxhmSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        MxhmSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.HARMONY)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.HARMONY)

        return token

__init__(verbose=False)

MxhmSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/mhxm_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    MxhmSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

NoteRestToken

Bases: ComplexToken

NoteRestToken class.

Attributes:

Name Type Description
pitch_duration_subtokens list

The subtokens for the pitch and duration

decoration_subtokens list

The subtokens for the decorations

Source code in kernpy/core/tokens.py
class NoteRestToken(ComplexToken):
    """
    NoteRestToken class.

    Attributes:
        pitch_duration_subtokens (list): The subtokens for the pitch and duration
        decoration_subtokens (list): The subtokens for the decorations
    """

    def __init__(
            self,
            encoding: str,
            pitch_duration_subtokens: List[Subtoken],
            decoration_subtokens: List[Subtoken]
    ):
        """
        NoteRestToken constructor.

        Args:
            encoding (str): The complete unprocessed encoding
            pitch_duration_subtokens (List[Subtoken]): The subtokens for the pitch and duration
            decoration_subtokens (List[Subtoken]): The subtokens for the decorations. Individual elements of the token, of type Subtoken
        """
        super().__init__(encoding, TokenCategory.NOTE_REST)
        if not pitch_duration_subtokens or len(pitch_duration_subtokens) == 0:
            raise ValueError('Empty name-duration subtokens')

        for subtoken in pitch_duration_subtokens:
            if not isinstance(subtoken, Subtoken):
                raise ValueError(f'All pitch-duration subtokens must be instances of Subtoken. Found {type(subtoken)}')
        for subtoken in decoration_subtokens:
            if not isinstance(subtoken, Subtoken):
                raise ValueError(f'All decoration subtokens must be instances of Subtoken. Found {type(subtoken)}')

        self.pitch_duration_subtokens = pitch_duration_subtokens
        self.decoration_subtokens = decoration_subtokens

    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns (str): The exported token.

        """
        filter_categories_fn = kwargs.get('filter_categories', None)

        # Filter subcategories
        pitch_duration_tokens = {
            subtoken for subtoken in self.pitch_duration_subtokens
            if filter_categories_fn is None or filter_categories_fn(subtoken.category)
        }
        decoration_tokens = {
            subtoken for subtoken in self.decoration_subtokens
            if filter_categories_fn is None or filter_categories_fn(subtoken.category)
        }
        pitch_duration_tokens_sorted = sorted(pitch_duration_tokens, key=lambda t:  (t.category.value, t.encoding))
        decoration_tokens_sorted     = sorted(decoration_tokens,     key=lambda t:  (t.category.value, t.encoding))

        # Join the sorted subtokens
        pitch_duration_part = TOKEN_SEPARATOR.join([subtoken.encoding for subtoken in pitch_duration_tokens_sorted])
        decoration_part = DECORATION_SEPARATOR.join([subtoken.encoding for subtoken in decoration_tokens_sorted])

        result = pitch_duration_part
        if len(decoration_part):
            result += DECORATION_SEPARATOR + decoration_part

        return result if len(result) > 0 else EMPTY_TOKEN

__init__(encoding, pitch_duration_subtokens, decoration_subtokens)

NoteRestToken constructor.

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
pitch_duration_subtokens List[Subtoken]

The subtokens for the pitch and duration

required
decoration_subtokens List[Subtoken]

The subtokens for the decorations. Individual elements of the token, of type Subtoken

required
Source code in kernpy/core/tokens.py
def __init__(
        self,
        encoding: str,
        pitch_duration_subtokens: List[Subtoken],
        decoration_subtokens: List[Subtoken]
):
    """
    NoteRestToken constructor.

    Args:
        encoding (str): The complete unprocessed encoding
        pitch_duration_subtokens (List[Subtoken]): The subtokens for the pitch and duration
        decoration_subtokens (List[Subtoken]): The subtokens for the decorations. Individual elements of the token, of type Subtoken
    """
    super().__init__(encoding, TokenCategory.NOTE_REST)
    if not pitch_duration_subtokens or len(pitch_duration_subtokens) == 0:
        raise ValueError('Empty name-duration subtokens')

    for subtoken in pitch_duration_subtokens:
        if not isinstance(subtoken, Subtoken):
            raise ValueError(f'All pitch-duration subtokens must be instances of Subtoken. Found {type(subtoken)}')
    for subtoken in decoration_subtokens:
        if not isinstance(subtoken, Subtoken):
            raise ValueError(f'All decoration subtokens must be instances of Subtoken. Found {type(subtoken)}')

    self.pitch_duration_subtokens = pitch_duration_subtokens
    self.decoration_subtokens = decoration_subtokens

export(**kwargs)

Exports the token.

Other Parameters:

Name Type Description
filter_categories Optional[Callable[[TokenCategory], bool]]

A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None. If None, all tokens will be exported.

Returns (str): The exported token.

Source code in kernpy/core/tokens.py
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns (str): The exported token.

    """
    filter_categories_fn = kwargs.get('filter_categories', None)

    # Filter subcategories
    pitch_duration_tokens = {
        subtoken for subtoken in self.pitch_duration_subtokens
        if filter_categories_fn is None or filter_categories_fn(subtoken.category)
    }
    decoration_tokens = {
        subtoken for subtoken in self.decoration_subtokens
        if filter_categories_fn is None or filter_categories_fn(subtoken.category)
    }
    pitch_duration_tokens_sorted = sorted(pitch_duration_tokens, key=lambda t:  (t.category.value, t.encoding))
    decoration_tokens_sorted     = sorted(decoration_tokens,     key=lambda t:  (t.category.value, t.encoding))

    # Join the sorted subtokens
    pitch_duration_part = TOKEN_SEPARATOR.join([subtoken.encoding for subtoken in pitch_duration_tokens_sorted])
    decoration_part = DECORATION_SEPARATOR.join([subtoken.encoding for subtoken in decoration_tokens_sorted])

    result = pitch_duration_part
    if len(decoration_part):
        result += DECORATION_SEPARATOR + decoration_part

    return result if len(result) > 0 else EMPTY_TOKEN
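
The export logic above boils down to filter, sort by (category, encoding), then join. A standalone sketch, with hypothetical subtokens modeled as `(encoding, category_rank)` pairs and illustrative rank values:

```python
def export_subtokens(subtokens, keep=lambda rank: True, sep='@'):
    """Filter subtokens with `keep`, sort by (category rank, encoding), join with `sep`."""
    kept = sorted((s for s in subtokens if keep(s[1])), key=lambda s: (s[1], s[0]))
    return sep.join(enc for enc, _ in kept)

# duration '2' and dot '.' share rank 0, pitch 'bb' has rank 1 (ranks are illustrative)
subtokens = [('bb', 1), ('2', 0), ('.', 0)]

export_subtokens(subtokens)                         # '.@2@bb'
export_subtokens(subtokens, keep=lambda r: r == 1)  # 'bb'
```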

PitchPositionReferenceSystem

Source code in kernpy/core/gkern.py
class PitchPositionReferenceSystem:
    def __init__(self, base_pitch: AgnosticPitch):
        """
        Initializes the PitchPositionReferenceSystem object.
        Args:
            base_pitch (AgnosticPitch): The AgnosticPitch in the first line of the Staff. \
             The AgnosticPitch object that serves as the reference point for the system.
        """
        self.base_pitch = base_pitch

    def compute_position(self, pitch: AgnosticPitch) -> PositionInStaff:
        """
        Computes the position in staff for the given pitch.
        Args:
            pitch (AgnosticPitch): The AgnosticPitch object to compute the position for.
        Returns:
            PositionInStaff: The PositionInStaff object representing the computed position.
        """
        # map letter names C–B to diatonic indices 0–6
        LETTER_TO_INDEX = {'C': 0, 'D': 1, 'E': 2,
                           'F': 3, 'G': 4, 'A': 5, 'B': 6}

        # strip off any '+' or '-' accidentals, then grab the letter
        def letter(p: AgnosticPitch) -> str:
            name = p.name.replace('+', '').replace('-', '')
            return AgnosticPitch(name, p.octave).name

        base_letter_idx = LETTER_TO_INDEX[letter(self.base_pitch)]
        target_letter_idx = LETTER_TO_INDEX[letter(pitch)]

        # "octave difference × 7" plus the letter‐index difference
        diatonic_steps = (pitch.octave - self.base_pitch.octave) * 7 \
                         + (target_letter_idx - base_letter_idx)

        # that many "lines or spaces" above (or below) the reference line
        return PositionInStaff(diatonic_steps)

__init__(base_pitch)

Initializes the PitchPositionReferenceSystem object.

Parameters:

Name Type Description Default
base_pitch AgnosticPitch

The AgnosticPitch on the first line of the staff; it serves as the reference point for the system.

required

Source code in kernpy/core/gkern.py
def __init__(self, base_pitch: AgnosticPitch):
    """
    Initializes the PitchPositionReferenceSystem object.
    Args:
        base_pitch (AgnosticPitch): The pitch on the first line of the staff, \
         used as the reference point for the system.
    """
    self.base_pitch = base_pitch

compute_position(pitch)

Computes the position in staff for the given pitch. Args: pitch (AgnosticPitch): The AgnosticPitch object to compute the position for. Returns: PositionInStaff: The PositionInStaff object representing the computed position.

Source code in kernpy/core/gkern.py
def compute_position(self, pitch: AgnosticPitch) -> PositionInStaff:
    """
    Computes the position in staff for the given pitch.
    Args:
        pitch (AgnosticPitch): The AgnosticPitch object to compute the position for.
    Returns:
        PositionInStaff: The PositionInStaff object representing the computed position.
    """
    # map letters C–B to diatonic indices 0–6
    LETTER_TO_INDEX = {'C': 0, 'D': 1, 'E': 2,
                       'F': 3, 'G': 4, 'A': 5, 'B': 6}

    # strip off any '+' or '-' accidentals, then grab the letter
    def letter(p: AgnosticPitch) -> str:
        name = p.name.replace('+', '').replace('-', '')
        return AgnosticPitch(name, p.octave).name

    base_letter_idx = LETTER_TO_INDEX[letter(self.base_pitch)]
    target_letter_idx = LETTER_TO_INDEX[letter(pitch)]

    # "octave difference × 7" plus the letter‐index difference
    diatonic_steps = (pitch.octave - self.base_pitch.octave) * 7 \
                     + (target_letter_idx - base_letter_idx)

    # that many "lines or spaces" above (or below) the reference line
    return PositionInStaff(diatonic_steps)
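
The position computation above boils down to counting diatonic steps between two letter/octave pairs. A minimal standalone sketch of that rule (plain Python; the `diatonic_steps` helper and the `(letter, octave)` tuple representation are illustrative, not part of the kernpy API):

```python
# Diatonic letters in ascending order within an octave, indexed 0-6.
LETTER_TO_INDEX = {'C': 0, 'D': 1, 'E': 2, 'F': 3, 'G': 4, 'A': 5, 'B': 6}

def diatonic_steps(base: tuple[str, int], target: tuple[str, int]) -> int:
    """Count lines/spaces from `base` to `target`, each given as (letter, octave)."""
    (base_letter, base_octave), (target_letter, target_octave) = base, target
    # octave difference x 7, plus the letter-index difference
    return (target_octave - base_octave) * 7 \
        + (LETTER_TO_INDEX[target_letter] - LETTER_TO_INDEX[base_letter])

# With E4 on the bottom line of a G clef, F4 sits in the first space (1)
# and C5 in the third space (5).
print(diatonic_steps(('E', 4), ('F', 4)))  # 1
print(diatonic_steps(('E', 4), ('C', 5)))  # 5
```

The returned step count is exactly the `line_space` value that `PositionInStaff` stores: even values land on lines, odd values in spaces.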

PitchRest

Represents a pitch or a rest in a note.

The pitch is represented using International Organization for Standardization (ISO) pitch notation. The first ledger line below the staff is C4 in G clef; the C above is C5, the C below is C3, and so on.

The Humdrum Kern format uses the following pitch representation: 'c' = C4 'cc' = C5 'ccc' = C6 'cccc' = C7

'C' = C3 'CC' = C2 'CCC' = C1

Rests are represented by the letter 'r' and have no pitch.

This class does not limit the pitch range.

In the following example, 'c' encodes C4, 'cc' C5, and 'ccc' C6.

**kern
*clefG2
2c          // C4
2cc         // C5
2ccc        // C6
2C          // C3
2CC         // C2
2CCC        // C1
*-
Source code in kernpy/core/tokens.py
class PitchRest:
    """
    Represents a name or a rest in a note.

    The name is represented using the International Standard Organization (ISO) name notation.
    The first line below the staff is the C4 in G clef. The above C is C5, the below C is C3, etc.

    The Humdrum Kern format uses the following name representation:
    'c' = C4
    'cc' = C5
    'ccc' = C6
    'cccc' = C7

    'C' = C3
    'CC' = C2
    'CCC' = C1

    The rests are represented by the letter 'r'. The rests do not have name.

    This class do not limit the name ranges.


    In the following example, the name is represented by the letter 'c'. The name of 'c' is C4, 'cc' is C5, 'ccc' is C6.
    ```
    **kern
    *clefG2
    2c          // C4
    2cc         // C5
    2ccc        // C6
    2C          // C3
    2CC         // C2
    2CCC        // C1
    *-
    ```
    """
    C4_PITCH_LOWERCASE = 'c'
    C4_OCATAVE = 4
    C3_PITCH_UPPERCASE = 'C'
    C3_OCATAVE = 3
    REST_CHARACTER = 'r'

    VALID_PITCHES = 'abcdefg' + 'ABCDEFG' + REST_CHARACTER

    def __init__(self, raw_pitch: str):
        """
        Create a new PitchRest object.

        Args:
            raw_pitch (str): pitch representation in Humdrum Kern format

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest = PitchRest('DDD')
        """
        if raw_pitch is None or len(raw_pitch) == 0:
            raise ValueError(f'Empty pitch: the pitch cannot be None or empty, but {raw_pitch} was provided.')

        self.encoding = raw_pitch
        self.pitch, self.octave = self.__parse_pitch_octave()

    def __parse_pitch_octave(self) -> (str, int):
        if self.encoding == PitchRest.REST_CHARACTER:
            return PitchRest.REST_CHARACTER, None

        if self.encoding.islower():
            min_octave = PitchRest.C4_OCATAVE
            octave = min_octave + (len(self.encoding) - 1)
            pitch = self.encoding[0].lower()
            return pitch, octave

        if self.encoding.isupper():
            max_octave = PitchRest.C3_OCATAVE
            octave = max_octave - (len(self.encoding) - 1)
            pitch = self.encoding[0].lower()
            return pitch, octave

        raise ValueError(f'Invalid pitch: {self.encoding} is not a valid pitch representation.')

    def is_rest(self) -> bool:
        """
        Check if this PitchRest represents a rest.

        Returns:
            bool: True if it is a rest, False otherwise.

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest.is_rest()
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest.is_rest()
            True
        """
        return self.octave is None

    @staticmethod
    def pitch_comparator(pitch_a: str, pitch_b: str) -> int:
        """
        Compare two pitches of the same octave.

        The lowest letter is 'a', so 'a' < 'b' < 'c' < 'd' < 'e' < 'f' < 'g'.

        Args:
            pitch_a: One pitch letter from 'abcdefg'
            pitch_b: Another pitch letter from 'abcdefg'

        Returns:
            -1 if pitch1 is lower than pitch2
            0 if pitch1 is equal to pitch2
            1 if pitch1 is higher than pitch2

        Examples:
            >>> PitchRest.pitch_comparator('c', 'c')
            0
            >>> PitchRest.pitch_comparator('c', 'd')
            -1
            >>> PitchRest.pitch_comparator('d', 'c')
            1
        """
        if pitch_a < pitch_b:
            return -1
        if pitch_a > pitch_b:
            return 1
        return 0

    def __str__(self):
        return f'{self.encoding}'

    def __repr__(self):
        return f'[PitchRest: {self.encoding}, pitch={self.pitch}, octave={self.octave}]'

    def __eq__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches and rests.

        Args:
            other (PitchRest): The other pitch to compare

        Returns (bool):
            True if the pitches are equal, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest == pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('ccc')
            >>> pitch_rest == pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest == pitch_rest2
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest == pitch_rest2
            True

        """
        if not isinstance(other, PitchRest):
            return False
        if self.is_rest() and other.is_rest():
            return True
        if self.is_rest() or other.is_rest():
            return False
        return self.pitch == other.pitch and self.octave == other.octave

    def __ne__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches and rests.
        Args:
            other (PitchRest): The other pitch to compare

        Returns (bool):
            True if the pitches are different, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest != pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('ccc')
            >>> pitch_rest != pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest != pitch_rest2
            True
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest != pitch_rest2
            False
        """
        return not self.__eq__(other)

    def __gt__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches.

        If either pitch is a rest, the comparison raises an exception.

        Args:
            other (PitchRest): The other pitch to compare

        Returns (bool): True if this pitch is higher than the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest > pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest > pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest > pitch_rest2
            True
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest > pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest > pitch_rest2
            Traceback (most recent call last):
            ValueError: ...


        """
        if self.is_rest() or other.is_rest():
            raise ValueError(f'Invalid comparison: > operator cannot be used to compare a rest.\n\
            self={repr(self)} > other={repr(other)}')

        if self.octave > other.octave:
            return True
        if self.octave == other.octave:
            return PitchRest.pitch_comparator(self.pitch, other.pitch) > 0
        return False

    def __lt__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches.

        If either pitch is a rest, the comparison raises an exception.

        Args:
            other: The other pitch to compare

        Returns:
            True if this pitch is lower than the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest < pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest < pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest < pitch_rest2
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest < pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest < pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...

        """
        if self.is_rest() or other.is_rest():
            raise ValueError(f'Invalid comparison: < operator cannot be used to compare a rest.\n\
            self={repr(self)} < other={repr(other)}')

        if self.octave < other.octave:
            return True
        if self.octave == other.octave:
            return PitchRest.pitch_comparator(self.pitch, other.pitch) < 0
        return False

    def __ge__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
        Args:
            other (PitchRest): The other pitch to compare

        Returns (bool):
            True if this pitch is higher than or equal to the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest >= pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest >= pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest >= pitch_rest2
            True
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest >= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest >= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...


        """
        return self.__gt__(other) or self.__eq__(other)

    def __le__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
        Args:
            other (PitchRest): The other pitch to compare

        Returns (bool): True if this pitch is lower than or equal to the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest <= pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest <= pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest <= pitch_rest2
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest <= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest <= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...

        """
        return self.__lt__(other) or self.__eq__(other)

__eq__(other)

Compare two pitches and rests.

Parameters:

Name Type Description Default
other PitchRest

The other pitch to compare

required

Returns (bool): True if the pitches are equal, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest == pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('ccc')
>>> pitch_rest == pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest == pitch_rest2
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest == pitch_rest2
True
Source code in kernpy/core/tokens.py
def __eq__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches and rests.

    Args:
        other (PitchRest): The other pitch to compare

    Returns (bool):
        True if the pitches are equal, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest == pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('ccc')
        >>> pitch_rest == pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest == pitch_rest2
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest == pitch_rest2
        True

    """
    if not isinstance(other, PitchRest):
        return False
    if self.is_rest() and other.is_rest():
        return True
    if self.is_rest() or other.is_rest():
        return False
    return self.pitch == other.pitch and self.octave == other.octave

__ge__(other)

Compare two pitches. If either PitchRest is a rest, the comparison raises an exception. Args: other (PitchRest): The other pitch to compare

Returns (bool): True if this pitch is higher than or equal to the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest >= pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest >= pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest >= pitch_rest2
True
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest >= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest >= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
Source code in kernpy/core/tokens.py
def __ge__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
    Args:
        other (PitchRest): The other pitch to compare

    Returns (bool):
        True if this pitch is higher than or equal to the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest >= pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest >= pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest >= pitch_rest2
        True
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest >= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest >= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...


    """
    return self.__gt__(other) or self.__eq__(other)

__gt__(other)

Compare two pitches.

If either pitch is a rest, the comparison raises an exception.

Parameters:

Name Type Description Default
other PitchRest

The other pitch to compare

required

Returns (bool): True if this pitch is higher than the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest > pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest > pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest > pitch_rest2
True
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest > pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest > pitch_rest2
Traceback (most recent call last):
ValueError: ...
Source code in kernpy/core/tokens.py
def __gt__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches.

    If either pitch is a rest, the comparison raises an exception.

    Args:
        other (PitchRest): The other pitch to compare

    Returns (bool): True if this pitch is higher than the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest > pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest > pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest > pitch_rest2
        True
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest > pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest > pitch_rest2
        Traceback (most recent call last):
        ValueError: ...


    """
    if self.is_rest() or other.is_rest():
        raise ValueError(f'Invalid comparison: > operator cannot be used to compare a rest.\n\
        self={repr(self)} > other={repr(other)}')

    if self.octave > other.octave:
        return True
    if self.octave == other.octave:
        return PitchRest.pitch_comparator(self.pitch, other.pitch) > 0
    return False

__init__(raw_pitch)

Create a new PitchRest object.

Parameters:

Name Type Description Default
raw_pitch str

pitch representation in Humdrum Kern format

required

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest = PitchRest('r')
>>> pitch_rest = PitchRest('DDD')
Source code in kernpy/core/tokens.py
def __init__(self, raw_pitch: str):
    """
    Create a new PitchRest object.

    Args:
        raw_pitch (str): pitch representation in Humdrum Kern format

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest = PitchRest('DDD')
    """
    if raw_pitch is None or len(raw_pitch) == 0:
        raise ValueError(f'Empty pitch: the pitch cannot be None or empty, but {raw_pitch} was provided.')

    self.encoding = raw_pitch
    self.pitch, self.octave = self.__parse_pitch_octave()

__le__(other)

Compare two pitches. If either PitchRest is a rest, the comparison raises an exception. Args: other (PitchRest): The other pitch to compare

Returns (bool): True if this pitch is lower than or equal to the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest <= pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest <= pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest <= pitch_rest2
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest <= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest <= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
Source code in kernpy/core/tokens.py
def __le__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
    Args:
        other (PitchRest): The other pitch to compare

    Returns (bool): True if this pitch is lower than or equal to the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest <= pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest <= pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest <= pitch_rest2
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest <= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest <= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...

    """
    return self.__lt__(other) or self.__eq__(other)

__lt__(other)

Compare two pitches.

If either pitch is a rest, the comparison raises an exception.

Parameters:

Name Type Description Default
other 'PitchRest'

The other pitch to compare

required

Returns:

Type Description
bool

True if this pitch is lower than the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest < pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest < pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest < pitch_rest2
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest < pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest < pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
Source code in kernpy/core/tokens.py
def __lt__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches.

    If either pitch is a rest, the comparison raises an exception.

    Args:
        other: The other pitch to compare

    Returns:
        True if this pitch is lower than the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest < pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest < pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest < pitch_rest2
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest < pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest < pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...

    """
    if self.is_rest() or other.is_rest():
        raise ValueError(f'Invalid comparison: < operator cannot be used to compare a rest.\n\
        self={repr(self)} < other={repr(other)}')

    if self.octave < other.octave:
        return True
    if self.octave == other.octave:
        return PitchRest.pitch_comparator(self.pitch, other.pitch) < 0
    return False

__ne__(other)

Compare two pitches and rests. Args: other (PitchRest): The other pitch to compare

Returns (bool): True if the pitches are different, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest != pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('ccc')
>>> pitch_rest != pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest != pitch_rest2
True
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest != pitch_rest2
False
Source code in kernpy/core/tokens.py
def __ne__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches and rests.
    Args:
        other (PitchRest): The other pitch to compare

    Returns (bool):
        True if the pitches are different, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest != pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('ccc')
        >>> pitch_rest != pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest != pitch_rest2
        True
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest != pitch_rest2
        False
    """
    return not self.__eq__(other)

is_rest()

Check if this PitchRest represents a rest.

Returns:

Name Type Description
bool bool

True if it is a rest, False otherwise.

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest.is_rest()
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest.is_rest()
True
Source code in kernpy/core/tokens.py
def is_rest(self) -> bool:
    """
    Check if this PitchRest represents a rest.

    Returns:
        bool: True if it is a rest, False otherwise.

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest.is_rest()
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest.is_rest()
        True
    """
    return self.octave is None

pitch_comparator(pitch_a, pitch_b) staticmethod

Compare two pitches of the same octave.

The lowest letter is 'a', so 'a' < 'b' < 'c' < 'd' < 'e' < 'f' < 'g'.

Parameters:

Name Type Description Default
pitch_a str

One pitch letter from 'abcdefg'

required
pitch_b str

Another pitch letter from 'abcdefg'

required

Returns:

Type Description
int

-1 if pitch1 is lower than pitch2

int

0 if pitch1 is equal to pitch2

int

1 if pitch1 is higher than pitch2

Examples:

>>> PitchRest.pitch_comparator('c', 'c')
0
>>> PitchRest.pitch_comparator('c', 'd')
-1
>>> PitchRest.pitch_comparator('d', 'c')
1
Source code in kernpy/core/tokens.py
@staticmethod
def pitch_comparator(pitch_a: str, pitch_b: str) -> int:
    """
    Compare two pitches of the same octave.

    The lowest letter is 'a', so 'a' < 'b' < 'c' < 'd' < 'e' < 'f' < 'g'.

    Args:
        pitch_a: One pitch letter from 'abcdefg'
        pitch_b: Another pitch letter from 'abcdefg'

    Returns:
        -1 if pitch1 is lower than pitch2
        0 if pitch1 is equal to pitch2
        1 if pitch1 is higher than pitch2

    Examples:
        >>> PitchRest.pitch_comparator('c', 'c')
        0
        >>> PitchRest.pitch_comparator('c', 'd')
        -1
        >>> PitchRest.pitch_comparator('d', 'c')
        1
    """
    if pitch_a < pitch_b:
        return -1
    if pitch_a > pitch_b:
        return 1
    return 0
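
Because the comparator returns -1/0/1, it can be adapted for sorting with `functools.cmp_to_key`. A self-contained sketch of the documented comparison logic (reimplemented here rather than imported from kernpy):

```python
from functools import cmp_to_key

def pitch_comparator(pitch_a: str, pitch_b: str) -> int:
    # Lexicographic comparison matches the documented order 'a' < 'b' < ... < 'g'.
    if pitch_a < pitch_b:
        return -1
    if pitch_a > pitch_b:
        return 1
    return 0

# Sort pitch names of the same octave from lowest to highest.
ordered = sorted(['g', 'c', 'a', 'e'], key=cmp_to_key(pitch_comparator))
print(ordered)  # ['a', 'c', 'e', 'g']
```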

PositionInStaff

Source code in kernpy/core/gkern.py
class PositionInStaff:
    LINE_CHARACTER = 'L'
    SPACE_CHARACTER = 'S'

    def __init__(self, line_space: int):
        """
        Initializes the PositionInStaff object.

        Args:
            line_space (int): 0 for bottom line, -1 for space under bottom line, 1 for space above bottom line. \
             Increments by 1 for each line or space.

        """
        self.line_space = line_space

    @classmethod
    def from_line(cls, line: int) -> PositionInStaff:
        """
        Creates a PositionInStaff object from a line number.

        Args:
            line (int): The line number. Line 1 is the bottom line, line 2 the next line above, line 0 the ledger line below the staff.

        Returns:
            PositionInStaff: The PositionInStaff object. 0 for the bottom line, 2 for the next line above, -2 for the ledger line below the staff, etc.
        """
        return cls((line - 1) * 2)

    @classmethod
    def from_space(cls, space: int) -> PositionInStaff:
        """
        Creates a PositionInStaff object from a space number.

        Args:
            space (int): The space number. Space 1 is the bottom space, space 2 the next space above.

        Returns:
            PositionInStaff: The PositionInStaff object.
        """
        return cls(space * 2 - 1)

    @classmethod
    def from_encoded(cls, encoded: str) -> PositionInStaff:
        """
        Creates a PositionInStaff object from an encoded string.

        Args:
            encoded (str): The encoded string.

        Returns:
            PositionInStaff: The PositionInStaff object.
        """
        if encoded.startswith(cls.LINE_CHARACTER):
            line = int(encoded[1:])  # Extract the line number
            return cls.from_line(line)
        elif encoded.startswith(cls.SPACE_CHARACTER):
            space = int(encoded[1:])  # Extract the space number
            return cls.from_space(space)
        else:
            raise ValueError(f"Invalid encoded string: {encoded}. "
                             f"Expected to start with '{cls.LINE_CHARACTER}' or '{cls.SPACE_CHARACTER}'.")


    def line(self):
        """
        Returns the line number of the position in staff.
        """
        return self.line_space // 2 + 1


    def space(self):
        """
        Returns the space number of the position in staff.
        """
        return (self.line_space - 1) // 2 + 1


    def is_line(self) -> bool:
        """
        Returns True if the position is a line, False otherwise. If it is not a line, it is a space.
        """
        return self.line_space % 2 == 0

    def move(self, line_space_difference: int) -> PositionInStaff:
        """
        Returns a new PositionInStaff object with the position moved by the given number of lines or spaces.

        Args:
            line_space_difference (int): The number of lines or spaces to move.

        Returns:
            PositionInStaff: The new PositionInStaff object.
        """
        return PositionInStaff(self.line_space + line_space_difference)

    def position_below(self) -> PositionInStaff:
        """
        Returns the position below the current position.
        """
        return self.move(-2)

    def position_above(self) -> PositionInStaff:
        """
        Returns the position above the current position.
        """
        return self.move(2)



    def __str__(self) -> str:
        """
        Returns the string representation of the position in staff.
        """
        if self.is_line():
            return f"{self.LINE_CHARACTER}{int(self.line())}"
        else:
            return f"{self.SPACE_CHARACTER}{int(self.space())}"

    def __repr__(self) -> str:
        """
        Returns the string representation of the PositionInStaff object.
        """
        return f"PositionInStaff(line_space={self.line_space}), {self.__str__()}"

    def __eq__(self, other) -> bool:
        """
        Compares two PositionInStaff objects.
        """
        if not isinstance(other, PositionInStaff):
            return False
        return self.line_space == other.line_space

    def __ne__(self, other) -> bool:
        """
        Compares two PositionInStaff objects.
        """
        return not self.__eq__(other)

    def __hash__(self) -> int:
        """
        Returns the hash of the PositionInStaff object.
        """
        return hash(self.line_space)

    def __lt__(self, other) -> bool:
        """
        Compares two PositionInStaff objects.
        """
        if not isinstance(other, PositionInStaff):
            return NotImplemented
        return self.line_space < other.line_space

__eq__(other)

Compares two PositionInStaff objects.

Source code in kernpy/core/gkern.py
def __eq__(self, other) -> bool:
    """
    Compares two PositionInStaff objects.
    """
    if not isinstance(other, PositionInStaff):
        return False
    return self.line_space == other.line_space

__hash__()

Returns the hash of the PositionInStaff object.

Source code in kernpy/core/gkern.py
def __hash__(self) -> int:
    """
    Returns the hash of the PositionInStaff object.
    """
    return hash(self.line_space)

__init__(line_space)

Initializes the PositionInStaff object.

Parameters:

Name Type Description Default
line_space int

0 for bottom line, -1 for space under bottom line, 1 for space above bottom line. Increments by 1 for each line or space.

required
Source code in kernpy/core/gkern.py
def __init__(self, line_space: int):
    """
    Initializes the PositionInStaff object.

    Args:
        line_space (int): 0 for bottom line, -1 for space under bottom line, 1 for space above bottom line. \
         Increments by 1 for each line or space.

    """
    self.line_space = line_space

__lt__(other)

Compares two PositionInStaff objects.

Source code in kernpy/core/gkern.py
def __lt__(self, other) -> bool:
    """
    Compares two PositionInStaff objects.
    """
    if not isinstance(other, PositionInStaff):
        return NotImplemented
    return self.line_space < other.line_space

__ne__(other)

Compares two PositionInStaff objects.

Source code in kernpy/core/gkern.py
def __ne__(self, other) -> bool:
    """
    Compares two PositionInStaff objects.
    """
    return not self.__eq__(other)

__repr__()

Returns the string representation of the PositionInStaff object.

Source code in kernpy/core/gkern.py
def __repr__(self) -> str:
    """
    Returns the string representation of the PositionInStaff object.
    """
    return f"PositionInStaff(line_space={self.line_space}), {self.__str__()}"

__str__()

Returns the string representation of the position in staff.

Source code in kernpy/core/gkern.py
def __str__(self) -> str:
    """
    Returns the string representation of the position in staff.
    """
    if self.is_line():
        return f"{self.LINE_CHARACTER}{int(self.line())}"
    else:
        return f"{self.SPACE_CHARACTER}{int(self.space())}"

from_encoded(encoded) classmethod

Creates a PositionInStaff object from an encoded string.

Parameters:

Name Type Description Default
encoded str

The encoded string.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The PositionInStaff object.

Source code in kernpy/core/gkern.py
@classmethod
def from_encoded(cls, encoded: str) -> PositionInStaff:
    """
    Creates a PositionInStaff object from an encoded string.

    Args:
        encoded (str): The encoded string.

    Returns:
        PositionInStaff: The PositionInStaff object.
    """
    if encoded.startswith(cls.LINE_CHARACTER):
        line = int(encoded[1:])  # Extract the line number
        return cls.from_line(line)
    elif encoded.startswith(cls.SPACE_CHARACTER):
        space = int(encoded[1:])  # Extract the space number
        return cls.from_space(space)
    else:
        raise ValueError(f"Invalid encoded string: {encoded}. "
                         f"Expected to start with '{cls.LINE_CHARACTER}' or '{cls.SPACE_CHARACTER}'.")

from_line(line) classmethod

Creates a PositionInStaff object from a line number.

Parameters:

Name Type Description Default
line int

The line number. Line 1 is the bottom line, line 2 the next line above, line 0 the ledger line below the staff.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The PositionInStaff object. 0 for the bottom line, 2 for the next line above, -2 for the ledger line below the staff, etc.

Source code in kernpy/core/gkern.py
@classmethod
def from_line(cls, line: int) -> PositionInStaff:
    """
    Creates a PositionInStaff object from a line number.

    Args:
        line (int): The line number. Line 1 is the bottom line, line 2 the next line above, line 0 the ledger line below the staff.

    Returns:
        PositionInStaff: The PositionInStaff object. 0 for the bottom line, 2 for the next line above, -2 for the ledger line below the staff, etc.
    """
    return cls((line - 1) * 2)

from_space(space) classmethod

Creates a PositionInStaff object from a space number.

Parameters:

Name Type Description Default
space int

The space number. Space 1 is the bottom space, space 2 the next space above.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The PositionInStaff object.

Source code in kernpy/core/gkern.py
@classmethod
def from_space(cls, space: int) -> PositionInStaff:
    """
    Creates a PositionInStaff object from a space number.

    Args:
        space (int): The space number. Space 1 is the bottom space, space 2 the next space above.

    Returns:
        PositionInStaff: The PositionInStaff object.
    """
    return cls(space * 2 - 1)

is_line()

Returns True if the position is a line, False otherwise. If it is not a line, it is a space.

Source code in kernpy/core/gkern.py
def is_line(self) -> bool:
    """
    Returns True if the position is a line, False otherwise. If it is not a line, it is a space.
    """
    return self.line_space % 2 == 0

line()

Returns the line number of the position in staff.

Source code in kernpy/core/gkern.py
def line(self):
    """
    Returns the line number of the position in staff.
    """
    return self.line_space // 2 + 1

move(line_space_difference)

Returns a new PositionInStaff object with the position moved by the given number of lines or spaces.

Parameters:

Name Type Description Default
line_space_difference int

The number of lines or spaces to move.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The new PositionInStaff object.

Source code in kernpy/core/gkern.py
def move(self, line_space_difference: int) -> PositionInStaff:
    """
    Returns a new PositionInStaff object with the position moved by the given number of lines or spaces.

    Args:
        line_space_difference (int): The number of lines or spaces to move.

    Returns:
        PositionInStaff: The new PositionInStaff object.
    """
    return PositionInStaff(self.line_space + line_space_difference)

position_above()

Returns the position above the current position.

Source code in kernpy/core/gkern.py
def position_above(self) -> PositionInStaff:
    """
    Returns the position above the current position.
    """
    return self.move(2)

position_below()

Returns the position below the current position.

Source code in kernpy/core/gkern.py
def position_below(self) -> PositionInStaff:
    """
    Returns the position below the current position.
    """
    return self.move(-2)

space()

Returns the space number of the position in staff.

Source code in kernpy/core/gkern.py
def space(self):
    """
    Returns the space number of the position in staff.
    """
    return (self.line_space - 1) // 2 + 1

RootSpineImporter

Bases: SpineImporter

Source code in kernpy/core/root_spine_importer.py
class RootSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        RootSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        #return RootSpineListener() # TODO: Create a custom functional listener for RootSpineImporter
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        kern_spine_importer = KernSpineImporter()
        token = kern_spine_importer.import_token(encoding)

        return token  # The **root spine tokens are always a subset of the **kern spine tokens

__init__(verbose=False)

RootSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/root_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    RootSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

SimpleToken

Bases: Token

SimpleToken class.

Source code in kernpy/core/tokens.py
class SimpleToken(Token):
    """
    SimpleToken class.
    """

    def __init__(self, encoding, category):
        super().__init__(encoding, category)

    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Args:
            **kwargs: 'filter_categories' (Optional[Callable[[TokenCategory], bool]]): It is ignored in this class.

        Returns (str): The encoded token representation.
        """
        return self.encoding

export(**kwargs)

Exports the token.

Parameters:

Name Type Description Default
**kwargs

'filter_categories' (Optional[Callable[[TokenCategory], bool]]): It is ignored in this class.

{}

Returns (str): The encoded token representation.

Source code in kernpy/core/tokens.py
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Args:
        **kwargs: 'filter_categories' (Optional[Callable[[TokenCategory], bool]]): It is ignored in this class.

    Returns (str): The encoded token representation.
    """
    return self.encoding

SpineOperationToken

Bases: SimpleToken

SpineOperationToken class.

This token represents different operations in the Humdrum kern encoding. These are the available operations:

- `*-`: spine-path terminator.
- `*`: null interpretation.
- `*+`: add spines.
- `*^`: split spines.
- `*x`: exchange spines.

Attributes:

Name Type Description
cancelled_at_stage int

The stage at which the operation was cancelled. Defaults to None.

Source code in kernpy/core/tokens.py
class SpineOperationToken(SimpleToken):
    """
    SpineOperationToken class.

    This token represents different operations in the Humdrum kern encoding.
    These are the available operations:
        - `*-`:  spine-path terminator.
        - `*`: null interpretation.
        - `*+`: add spines.
        - `*^`: split spines.
        - `*x`: exchange spines.

    Attributes:
        cancelled_at_stage (int): The stage at which the operation was cancelled. Defaults to None.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.SPINE_OPERATION)
        self.cancelled_at_stage = None

    def is_cancelled_at(self, stage) -> bool:
        """
        Checks if the operation was cancelled at the given stage.

        Args:
            stage (int): The stage at which the operation was cancelled.

        Returns:
            bool: True if the operation was cancelled at the given stage, False otherwise.
        """
        if self.cancelled_at_stage is None:
            return False
        else:
            return self.cancelled_at_stage < stage

is_cancelled_at(stage)

Checks if the operation was cancelled at the given stage.

Parameters:

Name Type Description Default
stage int

The stage at which the operation was cancelled.

required

Returns:

Name Type Description
bool bool

True if the operation was cancelled at the given stage, False otherwise.

Source code in kernpy/core/tokens.py
def is_cancelled_at(self, stage) -> bool:
    """
    Checks if the operation was cancelled at the given stage.

    Args:
        stage (int): The stage at which the operation was cancelled.

    Returns:
        bool: True if the operation was cancelled at the given stage, False otherwise.
    """
    if self.cancelled_at_stage is None:
        return False
    else:
        return self.cancelled_at_stage < stage
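
Note the strict comparison: an operation cancelled at stage `s` only counts as cancelled for later stages, not for stage `s` itself. A minimal standalone sketch of the check (not importing kernpy):

```python
from typing import Optional

def is_cancelled_at(cancelled_at_stage: Optional[int], stage: int) -> bool:
    # None means the operation was never cancelled.
    if cancelled_at_stage is None:
        return False
    # Strict less-than: cancellation takes effect only after its own stage.
    return cancelled_at_stage < stage

print(is_cancelled_at(None, 3))  # False
print(is_cancelled_at(2, 3))     # True
print(is_cancelled_at(3, 3))     # False
```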

StoreCache

A simple cache that stores the result of a callback function

Source code in kernpy/util/store_cache.py
class StoreCache:
    """
    A simple cache that stores the result of a callback function
    """
    def __init__(self):
        """
        Constructor
        """
        self.memory = {}

    def request(self, callback, request):
        """
        Request a value from the cache. If the value is not in the cache, it will be calculated by the callback function
        Args:
            callback (function): The callback function that will be called to calculate the value
            request (any): The request that will be passed to the callback function

        Returns (any): The value that was requested

        Examples:
            >>> def add_five(x):
            ...     return x + 5
            >>> store_cache = StoreCache()
            >>> store_cache.request(add_five, 5)  # Call the callback function
            10
            >>> store_cache.request(add_five, 5)  # Return the value from the cache, without calling the callback function
            10
        """
        if request in self.memory:
            return self.memory[request]
        else:
            result = callback(request)
            self.memory[request] = result
            return result

__init__()

Constructor

Source code in kernpy/util/store_cache.py
def __init__(self):
    """
    Constructor
    """
    self.memory = {}

request(callback, request)

Request a value from the cache. If the value is not in the cache, it will be calculated by the callback function.

Parameters:

Name Type Description Default
callback function

The callback function that will be called to calculate the value

required
request any

The request that will be passed to the callback function

required

Returns (any): The value that was requested

Examples:

>>> def add_five(x):
...     return x + 5
>>> store_cache = StoreCache()
>>> store_cache.request(add_five, 5)  # Call the callback function
10
>>> store_cache.request(add_five, 5)  # Return the value from the cache, without calling the callback function
10
Source code in kernpy/util/store_cache.py
def request(self, callback, request):
    """
    Request a value from the cache. If the value is not in the cache, it will be calculated by the callback function
    Args:
        callback (function): The callback function that will be called to calculate the value
        request (any): The request that will be passed to the callback function

    Returns (any): The value that was requested

    Examples:
        >>> def add_five(x):
        ...     return x + 5
        >>> store_cache = StoreCache()
        >>> store_cache.request(add_five, 5)  # Call the callback function
        10
        >>> store_cache.request(add_five, 5)  # Return the value from the cache, without calling the callback function
        10
    """
    if request in self.memory:
        return self.memory[request]
    else:
        result = callback(request)
        self.memory[request] = result
        return result
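
StoreCache is plain memoization keyed on the request value, so the callback runs at most once per distinct request. A self-contained sketch (same logic as above) with a call counter to make the caching visible:

```python
class StoreCache:
    """Memoize callback results keyed by the request value."""
    def __init__(self):
        self.memory = {}

    def request(self, callback, request):
        if request in self.memory:
            return self.memory[request]
        result = callback(request)
        self.memory[request] = result
        return result

calls = 0
def add_five(x):
    global calls
    calls += 1
    return x + 5

cache = StoreCache()
first = cache.request(add_five, 5)   # computes the value
second = cache.request(add_five, 5)  # served from the cache
print(first, second, calls)  # 10 10 1
```

Note that request values must be hashable, since they are used as dictionary keys.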

Subtoken

Subtoken class. The subtokens are the smallest units of categories. ComplexToken objects are composed of subtokens.

Attributes:

Name Type Description
encoding

The complete unprocessed encoding

category

The subtoken category, one of SubTokenCategory

Source code in kernpy/core/tokens.py
class Subtoken:
    """
    Subtoken class. The subtokens are the smallest units of categories. ComplexToken objects are composed of subtokens.

    Attributes:
        encoding: The complete unprocessed encoding
        category: The subtoken category, one of SubTokenCategory
    """
    DECORATION = None

    def __init__(self, encoding: str, category: TokenCategory):
        """
        Subtoken constructor

        Args:
            encoding (str): The complete unprocessed encoding
            category (TokenCategory): The subtoken category. \
                It should be a child of the main 'TokenCategory' in the hierarchy.

        """
        self.encoding = encoding
        self.category = category

    def __str__(self):
        """
        Returns the string representation of the subtoken.

        Returns (str): The string representation of the subtoken.
        """
        return self.encoding

    def __eq__(self, other):
        """
        Compare two subtokens.

        Args:
            other (Subtoken): The other subtoken to compare.
        Returns (bool): True if the subtokens are equal, False otherwise.
        """
        if not isinstance(other, Subtoken):
            return False
        return self.encoding == other.encoding and self.category == other.category

    def __ne__(self, other):
        """
        Compare two subtokens.

        Args:
            other (Subtoken): The other subtoken to compare.
        Returns (bool): True if the subtokens are different, False otherwise.
        """
        return not self.__eq__(other)

    def __hash__(self):
        """
        Returns the hash of the subtoken.

        Returns (int): The hash of the subtoken.
        """
        return hash((self.encoding, self.category))

__eq__(other)

Compare two subtokens.

Parameters:

Name Type Description Default
other Subtoken

The other subtoken to compare.

required

Returns (bool): True if the subtokens are equal, False otherwise.

Source code in kernpy/core/tokens.py
def __eq__(self, other):
    """
    Compare two subtokens.

    Args:
        other (Subtoken): The other subtoken to compare.
    Returns (bool): True if the subtokens are equal, False otherwise.
    """
    if not isinstance(other, Subtoken):
        return False
    return self.encoding == other.encoding and self.category == other.category

__hash__()

Returns the hash of the subtoken.

Returns (int): The hash of the subtoken.

Source code in kernpy/core/tokens.py
def __hash__(self):
    """
    Returns the hash of the subtoken.

    Returns (int): The hash of the subtoken.
    """
    return hash((self.encoding, self.category))

__init__(encoding, category)

Subtoken constructor

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
category TokenCategory

The subtoken category. It should be a child of the main 'TokenCategory' in the hierarchy.

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str, category: TokenCategory):
    """
    Subtoken constructor

    Args:
        encoding (str): The complete unprocessed encoding
        category (TokenCategory): The subtoken category. \
            It should be a child of the main 'TokenCategory' in the hierarchy.

    """
    self.encoding = encoding
    self.category = category

__ne__(other)

Compare two subtokens.

Parameters:

Name Type Description Default
other Subtoken

The other subtoken to compare.

required

Returns (bool): True if the subtokens are different, False otherwise.

Source code in kernpy/core/tokens.py
def __ne__(self, other):
    """
    Compare two subtokens.

    Args:
        other (Subtoken): The other subtoken to compare.
    Returns (bool): True if the subtokens are different, False otherwise.
    """
    return not self.__eq__(other)

__str__()

Returns the string representation of the subtoken.

Returns (str): The string representation of the subtoken.

Source code in kernpy/core/tokens.py
def __str__(self):
    """
    Returns the string representation of the subtoken.

    Returns (str): The string representation of the subtoken.
    """
    return self.encoding

TextSpineImporter

Bases: SpineImporter

Source code in kernpy/core/text_spine_importer.py
class TextSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        TextSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()  # TODO: Create a custom functional listener for TextSpineImporter

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.LYRICS)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.BARLINES,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.LYRICS)

        return token
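
import_token follows a fallback pattern: attempt the stricter **kern parse, and if it fails or produces a category outside the accepted set, wrap the raw encoding as LYRICS. The control flow can be sketched standalone (the parser and category names below are simplified stand-ins, not the kernpy API):

```python
ACCEPTED = {'STRUCTURAL', 'SIGNATURES', 'EMPTY', 'BARLINES', 'COMMENTS'}

def strict_parse(encoding: str) -> str:
    # Stand-in for KernSpineImporter: here only barlines parse successfully.
    if encoding.startswith('='):
        return 'BARLINES'
    raise ValueError('not a kern token')

def import_token(encoding: str) -> str:
    try:
        category = strict_parse(encoding)
    except Exception:
        return 'LYRICS'        # unparseable input falls back to lyrics
    if category not in ACCEPTED:
        return 'LYRICS'        # parseable, but not an accepted category
    return category

print(import_token('=1'))       # BARLINES
print(import_token('Ky-ri-e'))  # LYRICS
```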

__init__(verbose=False)

TextSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/text_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    TextSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

Token

Bases: AbstractToken, ABC

Abstract Token class.

Source code in kernpy/core/tokens.py
class Token(AbstractToken, ABC):
    """
    Abstract Token class.
    """

    def __init__(self, encoding, category):
        super().__init__(encoding, category)

TokenCategory

Bases: Enum

Options for the category of a token.

This is used to determine what kind of token should be exported.

The categories are listed in the specific order used to compare and sort them. The hierarchical order, however, must be defined in other data structures.

Source code in kernpy/core/tokens.py
class TokenCategory(Enum):
    """
    Options for the category of a token.

    This is used to determine what kind of token should be exported.

    The categories are listed in the specific order used to compare and sort them. The hierarchical order, however, must be defined in other data structures.
    """
    STRUCTURAL = auto()  # header, spine operations
    HEADER = auto()  # **kern, **mens, **text, **harm, **mxhm, **root, **dyn, **dynam, **fing
    SPINE_OPERATION = auto()
    CORE = auto() # notes, rests, chords, etc.
    ERROR = auto()
    NOTE_REST = auto()
    NOTE = auto()
    DURATION = auto()
    PITCH = auto()
    ALTERATION = auto()
    DECORATION = auto()
    REST = auto()
    CHORD = auto()
    EMPTY = auto()  # placeholders, null interpretation
    SIGNATURES = auto()
    CLEF = auto()
    TIME_SIGNATURE = auto()
    METER_SYMBOL = auto()
    KEY_SIGNATURE = auto()
    KEY_TOKEN = auto()
    ENGRAVED_SYMBOLS = auto()
    OTHER_CONTEXTUAL = auto()
    BARLINES = auto()
    COMMENTS = auto()
    FIELD_COMMENTS = auto()
    LINE_COMMENTS = auto()
    DYNAMICS = auto()
    HARMONY = auto()
    FINGERING = auto()
    LYRICS = auto()
    INSTRUMENTS = auto()
    IMAGE_ANNOTATIONS = auto()
    BOUNDING_BOXES = auto()
    LINE_BREAK = auto()
    OTHER = auto()
    MHXM = auto()
    ROOT = auto()

    def __lt__(self, other):
        """
        Compare two TokenCategory.
        Args:
            other (TokenCategory): The other category to compare.

        Returns (bool): True if this category is lower than the other, False otherwise.

        Examples:
            >>> TokenCategory.STRUCTURAL < TokenCategory.CORE
            True
            >>> TokenCategory.STRUCTURAL < TokenCategory.STRUCTURAL
            False
            >>> TokenCategory.CORE < TokenCategory.STRUCTURAL
            False
            >>> sorted([TokenCategory.STRUCTURAL, TokenCategory.CORE])
            [TokenCategory.STRUCTURAL, TokenCategory.CORE]
        """
        if isinstance(other, TokenCategory):
            return self.value < other.value
        return NotImplemented

    @classmethod
    def all(cls) -> Set[TokenCategory]:
        """
        Get all categories in the hierarchy.

        Returns:
            Set[TokenCategory]: The set of all categories in the hierarchy.

        Examples:
            >>> import kernpy as kp
            >>> kp.TokenCategory.all()
            set([<TokenCategory.MHXM: 29>, <TokenCategory.COMMENTS: 19>, <TokenCategory.BARLINES: 18>, <TokenCategory.CORE: 2>, <TokenCategory.BOUNDING_BOXES: 27>, <TokenCategory.NOTE_REST: 3>, <TokenCategory.NOTE: 4>, <TokenCategory.ENGRAVED_SYMBOLS: 16>, <TokenCategory.SIGNATURES: 11>, <TokenCategory.REST: 8>, <TokenCategory.METER_SYMBOL: 14>, <TokenCategory.HARMONY: 23>, <TokenCategory.KEY_SIGNATURE: 15>, <TokenCategory.EMPTY: 10>, <TokenCategory.PITCH: 6>, <TokenCategory.LINE_COMMENTS: 21>, <TokenCategory.FINGERING: 24>, <TokenCategory.DECORATION: 7>, <TokenCategory.OTHER: 28>, <TokenCategory.INSTRUMENTS: 26>, <TokenCategory.STRUCTURAL: 1>, <TokenCategory.FIELD_COMMENTS: 20>, <TokenCategory.LYRICS: 25>, <TokenCategory.CLEF: 12>, <TokenCategory.DURATION: 5>, <TokenCategory.DYNAMICS: 22>, <TokenCategory.CHORD: 9>, <TokenCategory.TIME_SIGNATURE: 13>, <TokenCategory.OTHER_CONTEXTUAL: 17>])
        """
        return set([t for t in TokenCategory])

    @classmethod
    def tree(cls):
        """
        Return a string representation of the category hierarchy.

        Returns (str): The string representation of the category hierarchy.

        Examples:
            >>> import kernpy as kp
            >>> print(kp.TokenCategory.tree())
            .
            ├── TokenCategory.STRUCTURAL
            ├── TokenCategory.CORE
            │   ├── TokenCategory.NOTE_REST
            │   │   ├── TokenCategory.DURATION
            │   │   ├── TokenCategory.NOTE
            │   │   │   ├── TokenCategory.PITCH
            │   │   │   └── TokenCategory.DECORATION
            │   │   └── TokenCategory.REST
            │   ├── TokenCategory.CHORD
            │   └── TokenCategory.EMPTY
            ├── TokenCategory.SIGNATURES
            │   ├── TokenCategory.CLEF
            │   ├── TokenCategory.TIME_SIGNATURE
            │   ├── TokenCategory.METER_SYMBOL
            │   └── TokenCategory.KEY_SIGNATURE
            ├── TokenCategory.ENGRAVED_SYMBOLS
            ├── TokenCategory.OTHER_CONTEXTUAL
            ├── TokenCategory.BARLINES
            ├── TokenCategory.COMMENTS
            │   ├── TokenCategory.FIELD_COMMENTS
            │   └── TokenCategory.LINE_COMMENTS
            ├── TokenCategory.DYNAMICS
            ├── TokenCategory.HARMONY
            ├── TokenCategory.FINGERING
            ├── TokenCategory.LYRICS
            ├── TokenCategory.INSTRUMENTS
            ├── TokenCategory.BOUNDING_BOXES
            └── TokenCategory.OTHER
        """
        return TokenCategoryHierarchyMapper.tree()

    @classmethod
    def is_child(cls, *, child: TokenCategory, parent: TokenCategory) -> bool:
        """
        Check if the child category is a child of the parent category.

        Args:
            child (TokenCategory): The child category.
            parent (TokenCategory): The parent category.

        Returns (bool): True if the child category is a child of the parent category, False otherwise.
        """
        return TokenCategoryHierarchyMapper.is_child(parent=parent, child=child)

    @classmethod
    def children(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the children of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of child categories of the target category.
        """
        return TokenCategoryHierarchyMapper.children(parent=target)

    @classmethod
    def valid(cls, *, include: Optional[Set[TokenCategory]] = None, exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
        """
        Get the valid categories based on the include and exclude sets.

        Args:
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.
        """
        return TokenCategoryHierarchyMapper.valid(include=include, exclude=exclude)

    @classmethod
    def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the leaves of the subtree of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of leaf categories of the target category.
        """
        return TokenCategoryHierarchyMapper.leaves(target=target)

    @classmethod
    def nodes(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the nodes of the subtree of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of node categories of the target category.
        """
        return TokenCategoryHierarchyMapper.nodes(parent=target)

    @classmethod
    def match(cls,
              target: TokenCategory, *,
              include: Optional[Set[TokenCategory]] = None,
              exclude: Optional[Set[TokenCategory]] = None) -> bool:
        """
        Check if the target category matches the include and exclude sets.

        Args:
            target (TokenCategory): The target category.
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (bool): True if the target category matches the include and exclude sets, False otherwise.
        """
        return TokenCategoryHierarchyMapper.match(category=target, include=include, exclude=exclude)

    def __str__(self):
        """
        Get the string representation of the category.

        Returns (str): The string representation of the category.
        """
        return self.name

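Because `__lt__` compares the underlying `auto()` values, categories sort in declaration order. A minimal standalone sketch of this pattern (using a hypothetical three-member enum, not kernpy's full `TokenCategory`):

```python
from enum import Enum, auto

class Cat(Enum):
    # Declaration order determines the auto() values, and therefore the sort order.
    STRUCTURAL = auto()
    CORE = auto()
    SIGNATURES = auto()

    def __lt__(self, other):
        if isinstance(other, Cat):
            return self.value < other.value
        return NotImplemented

# sorted() only needs __lt__, so no total-ordering decorator is required.
print([c.name for c in sorted([Cat.SIGNATURES, Cat.STRUCTURAL, Cat.CORE])])
```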
__lt__(other)

Compare two TokenCategory. Args: other (TokenCategory): The other category to compare.

Returns (bool): True if this category is lower than the other, False otherwise.

Examples:

>>> TokenCategory.STRUCTURAL < TokenCategory.CORE
True
>>> TokenCategory.STRUCTURAL < TokenCategory.STRUCTURAL
False
>>> TokenCategory.CORE < TokenCategory.STRUCTURAL
False
>>> sorted([TokenCategory.STRUCTURAL, TokenCategory.CORE])
[TokenCategory.STRUCTURAL, TokenCategory.CORE]
Source code in kernpy/core/tokens.py
def __lt__(self, other):
    """
    Compare two TokenCategory.
    Args:
        other (TokenCategory): The other category to compare.

    Returns (bool): True if this category is lower than the other, False otherwise.

    Examples:
        >>> TokenCategory.STRUCTURAL < TokenCategory.CORE
        True
        >>> TokenCategory.STRUCTURAL < TokenCategory.STRUCTURAL
        False
        >>> TokenCategory.CORE < TokenCategory.STRUCTURAL
        False
        >>> sorted([TokenCategory.STRUCTURAL, TokenCategory.CORE])
        [TokenCategory.STRUCTURAL, TokenCategory.CORE]
    """
    if isinstance(other, TokenCategory):
        return self.value < other.value
    return NotImplemented

__str__()

Get the string representation of the category.

Returns (str): The string representation of the category.

Source code in kernpy/core/tokens.py
def __str__(self):
    """
    Get the string representation of the category.

    Returns (str): The string representation of the category.
    """
    return self.name

children(target) classmethod

Get the children of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of child categories of the target category.

Source code in kernpy/core/tokens.py
@classmethod
def children(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the children of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (Set[TokenCategory]): The set of child categories of the target category.
    """
    return TokenCategoryHierarchyMapper.children(parent=target)

is_child(*, child, parent) classmethod

Check if the child category is a child of the parent category.

Parameters:

Name Type Description Default
child TokenCategory

The child category.

required
parent TokenCategory

The parent category.

required

Returns (bool): True if the child category is a child of the parent category, False otherwise.

Source code in kernpy/core/tokens.py
@classmethod
def is_child(cls, *, child: TokenCategory, parent: TokenCategory) -> bool:
    """
    Check if the child category is a child of the parent category.

    Args:
        child (TokenCategory): The child category.
        parent (TokenCategory): The parent category.

    Returns (bool): True if the child category is a child of the parent category, False otherwise.
    """
    return TokenCategoryHierarchyMapper.is_child(parent=parent, child=child)

leaves(target) classmethod

Get the leaves of the subtree of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of leaf categories of the target category.

Source code in kernpy/core/tokens.py
@classmethod
def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the leaves of the subtree of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (Set[TokenCategory]): The set of leaf categories of the target category.
    """
    return TokenCategoryHierarchyMapper.leaves(target=target)

match(target, *, include=None, exclude=None) classmethod

Check if the target category matches the include and exclude sets.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (bool): True if the target category matches the include and exclude sets, False otherwise.

Source code in kernpy/core/tokens.py
@classmethod
def match(cls,
          target: TokenCategory, *,
          include: Optional[Set[TokenCategory]] = None,
          exclude: Optional[Set[TokenCategory]] = None) -> bool:
    """
    Check if the target category matches the include and exclude sets.

    Args:
        target (TokenCategory): The target category.
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (bool): True if the target category matches the include and exclude sets, False otherwise.
    """
    return TokenCategoryHierarchyMapper.match(category=target, include=include, exclude=exclude)

nodes(target) classmethod

Get the nodes of the subtree of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of node categories of the target category.

Source code in kernpy/core/tokens.py
@classmethod
def nodes(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the nodes of the subtree of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (Set[TokenCategory]): The set of node categories of the target category.
    """
    return TokenCategoryHierarchyMapper.nodes(parent=target)

tree() classmethod

Return a string representation of the category hierarchy. Returns (str): The string representation of the category hierarchy.

Examples:

>>> import kernpy as kp
>>> print(kp.TokenCategory.tree())
.
├── TokenCategory.STRUCTURAL
├── TokenCategory.CORE
│   ├── TokenCategory.NOTE_REST
│   │   ├── TokenCategory.DURATION
│   │   ├── TokenCategory.NOTE
│   │   │   ├── TokenCategory.PITCH
│   │   │   └── TokenCategory.DECORATION
│   │   └── TokenCategory.REST
│   ├── TokenCategory.CHORD
│   └── TokenCategory.EMPTY
├── TokenCategory.SIGNATURES
│   ├── TokenCategory.CLEF
│   ├── TokenCategory.TIME_SIGNATURE
│   ├── TokenCategory.METER_SYMBOL
│   └── TokenCategory.KEY_SIGNATURE
├── TokenCategory.ENGRAVED_SYMBOLS
├── TokenCategory.OTHER_CONTEXTUAL
├── TokenCategory.BARLINES
├── TokenCategory.COMMENTS
│   ├── TokenCategory.FIELD_COMMENTS
│   └── TokenCategory.LINE_COMMENTS
├── TokenCategory.DYNAMICS
├── TokenCategory.HARMONY
├── TokenCategory.FINGERING
├── TokenCategory.LYRICS
├── TokenCategory.INSTRUMENTS
├── TokenCategory.BOUNDING_BOXES
└── TokenCategory.OTHER
Source code in kernpy/core/tokens.py
@classmethod
def tree(cls):
    """
    Return a string representation of the category hierarchy.

    Returns (str): The string representation of the category hierarchy.

    Examples:
        >>> import kernpy as kp
        >>> print(kp.TokenCategory.tree())
        .
        ├── TokenCategory.STRUCTURAL
        ├── TokenCategory.CORE
        │   ├── TokenCategory.NOTE_REST
        │   │   ├── TokenCategory.DURATION
        │   │   ├── TokenCategory.NOTE
        │   │   │   ├── TokenCategory.PITCH
        │   │   │   └── TokenCategory.DECORATION
        │   │   └── TokenCategory.REST
        │   ├── TokenCategory.CHORD
        │   └── TokenCategory.EMPTY
        ├── TokenCategory.SIGNATURES
        │   ├── TokenCategory.CLEF
        │   ├── TokenCategory.TIME_SIGNATURE
        │   ├── TokenCategory.METER_SYMBOL
        │   └── TokenCategory.KEY_SIGNATURE
        ├── TokenCategory.ENGRAVED_SYMBOLS
        ├── TokenCategory.OTHER_CONTEXTUAL
        ├── TokenCategory.BARLINES
        ├── TokenCategory.COMMENTS
        │   ├── TokenCategory.FIELD_COMMENTS
        │   └── TokenCategory.LINE_COMMENTS
        ├── TokenCategory.DYNAMICS
        ├── TokenCategory.HARMONY
        ├── TokenCategory.FINGERING
        ├── TokenCategory.LYRICS
        ├── TokenCategory.INSTRUMENTS
        ├── TokenCategory.BOUNDING_BOXES
        └── TokenCategory.OTHER
    """
    return TokenCategoryHierarchyMapper.tree()

valid(*, include=None, exclude=None) classmethod

Get the valid categories based on the include and exclude sets.

Parameters:

Name Type Description Default
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.

Source code in kernpy/core/tokens.py
@classmethod
def valid(cls, *, include: Optional[Set[TokenCategory]] = None, exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
    """
    Get the valid categories based on the include and exclude sets.

    Args:
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.
    """
    return TokenCategoryHierarchyMapper.valid(include=include, exclude=exclude)

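`valid()` reduces to simple set algebra: expand every included category to itself plus its descendants, do the same for the excluded set, and subtract. A standalone sketch under that reading, with a toy string-keyed tree standing in for the real `TokenCategory` hierarchy (the names are an illustrative subset, not kernpy's API):

```python
from typing import Dict, Set

# Toy tree mirroring the shape of a few TokenCategory branches.
HIERARCHY: Dict[str, dict] = {
    "CORE": {"NOTE_REST": {"NOTE": {}, "REST": {}}, "CHORD": {}},
    "SIGNATURES": {"CLEF": {}, "KEY_SIGNATURE": {}},
}

def subtree(tree: Dict[str, dict], target: str) -> Set[str]:
    """Return target plus all of its descendants, or an empty set if absent."""
    for node, children in tree.items():
        if node == target:
            found = {node}
            stack = [children]
            while stack:
                t = stack.pop()
                found.update(t)          # add this level's keys
                stack.extend(t.values()) # descend into each child's subtree
            return found
        hit = subtree(children, target)
        if hit:
            return hit
    return set()

def valid(include: Set[str], exclude: Set[str]) -> Set[str]:
    # Included/excluded categories expand to whole subtrees before subtracting.
    inc = set().union(*(subtree(HIERARCHY, c) for c in include)) if include else set()
    exc = set().union(*(subtree(HIERARCHY, c) for c in exclude)) if exclude else set()
    return inc - exc

print(valid({"CORE"}, {"REST"}))  # the CORE subtree minus REST
```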
TokenCategoryHierarchyMapper

Mapping of the TokenCategory hierarchy.

This class is used to define the hierarchy of the TokenCategory. Useful related methods are provided.

Source code in kernpy/core/tokens.py
class TokenCategoryHierarchyMapper:
    """
    Mapping of the TokenCategory hierarchy.

    This class is used to define the hierarchy of the TokenCategory. Useful related methods are provided.

    The hierarchy is a recursive dictionary (a tree) that defines the parent-child
    relationships between the categories.
    """
    _hierarchy_typing = Dict[TokenCategory, '_hierarchy_typing']
    hierarchy: _hierarchy_typing = {
        TokenCategory.STRUCTURAL: {
            TokenCategory.HEADER: {},  # each leave must be an empty dictionary
            TokenCategory.SPINE_OPERATION: {},
        },
        TokenCategory.CORE: {
            TokenCategory.NOTE_REST: {
                TokenCategory.DURATION: {},
                TokenCategory.NOTE: {
                    TokenCategory.PITCH: {},
                    TokenCategory.DECORATION: {},
                    TokenCategory.ALTERATION: {},
                },
                TokenCategory.REST: {},
            },
            TokenCategory.CHORD: {},
            TokenCategory.EMPTY: {},
            TokenCategory.ERROR: {},
        },
        TokenCategory.SIGNATURES: {
            TokenCategory.CLEF: {},
            TokenCategory.TIME_SIGNATURE: {},
            TokenCategory.METER_SYMBOL: {},
            TokenCategory.KEY_SIGNATURE: {},
            TokenCategory.KEY_TOKEN: {},
        },
        TokenCategory.ENGRAVED_SYMBOLS: {},
        TokenCategory.OTHER_CONTEXTUAL: {},
        TokenCategory.BARLINES: {},
        TokenCategory.COMMENTS: {
            TokenCategory.FIELD_COMMENTS: {},
            TokenCategory.LINE_COMMENTS: {},
        },
        TokenCategory.DYNAMICS: {},
        TokenCategory.HARMONY: {},
        TokenCategory.FINGERING: {},
        TokenCategory.LYRICS: {},
        TokenCategory.INSTRUMENTS: {},
        TokenCategory.IMAGE_ANNOTATIONS: {
            TokenCategory.BOUNDING_BOXES: {},
            TokenCategory.LINE_BREAK: {},
        },
        TokenCategory.OTHER: {},
        TokenCategory.MHXM: {},
        TokenCategory.ROOT: {},
    }

    @classmethod
    def _is_child(cls, parent: TokenCategory, child: TokenCategory, *, tree: '_hierarchy_typing') -> bool:
        """
        Recursively check if `child` is in the subtree of `parent`.

        Args:
            parent (TokenCategory): The parent category.
            child (TokenCategory): The category to check.
            tree (_hierarchy_typing): The subtree to check.

        Returns:
            bool: True if `child` is a descendant of `parent`, False otherwise.
        """
        # Base case: the subtree is empty.
        if len(tree.keys()) == 0:
            return False

        # Recursive case: explore the direct children of the parent.
        return any(
            direct_child == child or cls._is_child(direct_child, child, tree=tree[parent])
            for direct_child in tree.get(parent, {})
        )
        # Equivalent to the explicit loop:
        # direct_children = tree.get(parent, dict())
        # for direct_child in direct_children.keys():
        #     if direct_child == child or cls._is_child(direct_child, child, tree=tree[parent]):
        #         return True

    @classmethod
    def is_child(cls, parent: TokenCategory, child: TokenCategory) -> bool:
        """
        Recursively check if `child` is in the subtree of `parent`. If `parent` is the same as `child`, return True.

        Args:
            parent (TokenCategory): The parent category.
            child (TokenCategory): The category to check.

        Returns:
            bool: True if `child` is a descendant of `parent`, False otherwise.
        """
        if parent == child:
            return True
        return cls._is_child(parent, child, tree=cls.hierarchy)

    @classmethod
    def children(cls, parent: TokenCategory) -> Set[TokenCategory]:
        """
        Get the direct children of the parent category.

        Args:
            parent (TokenCategory): The parent category.

        Returns:
            Set[TokenCategory]: The set of direct child categories of the parent category.
        """
        return set(cls.hierarchy.get(parent, {}).keys())

    @classmethod
    def _nodes(cls, tree: _hierarchy_typing) -> Set[TokenCategory]:
        """
        Recursively get all nodes in the given hierarchy tree.
        """
        nodes = set(tree.keys())
        for child in tree.values():
            nodes.update(cls._nodes(child))
        return nodes

    @classmethod
    def _find_subtree(cls, tree: '_hierarchy_typing', parent: TokenCategory) -> Optional['_hierarchy_typing']:
        """
        Recursively find the subtree for the given parent category.
        """
        if parent in tree:
            return tree[parent]  # Return subtree if parent is found at this level
        for child, sub_tree in tree.items():
            result = cls._find_subtree(sub_tree, parent)
            if result is not None:
                return result
        return None  # Return None if parent is not found; this should never happen.


    @classmethod
    def nodes(cls, parent: TokenCategory) -> Set[TokenCategory]:
        """
        Get all nodes of the subtree of the parent category.

        Args:
            parent (TokenCategory): The parent category.

        Returns:
            Set[TokenCategory]: The set of nodes of the subtree of the parent category.
        """
        subtree = cls._find_subtree(cls.hierarchy, parent)
        return cls._nodes(subtree) if subtree is not None else set()

    @classmethod
    def valid(cls,
              include: Optional[Set[TokenCategory]] = None,
              exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
        """
        Get the valid categories based on the include and exclude sets.

        Args:
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.
        """
        include = cls._validate_include(include)
        exclude = cls._validate_exclude(exclude)

        included_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in include]) if len(include) > 0 else include
        excluded_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in exclude]) if len(exclude) > 0 else exclude
        return included_nodes - excluded_nodes

    @classmethod
    def _leaves(cls, tree: '_hierarchy_typing') -> Set[TokenCategory]:
        """
        Recursively get all leaves (nodes without children) in the hierarchy tree.
        """
        if not tree:
            return set()
        leaves = {node for node, children in tree.items() if not children}
        for node, children in tree.items():
            leaves.update(cls._leaves(children))
        return leaves

    @classmethod
    def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the leaves of the subtree of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of leaf categories of the target category.
        """
        tree = cls._find_subtree(cls.hierarchy, target)
        return cls._leaves(tree)


    @classmethod
    def _match(cls, category: TokenCategory, *,
               include: Set[TokenCategory],
               exclude: Set[TokenCategory]) -> bool:
        """
        Check if a category matches include/exclude criteria.
        """
        # Include the category itself along with its descendants.
        target_nodes = cls.nodes(category) | {category}

        valid_categories = cls.valid(include=include, exclude=exclude)

        # Check if any node in the target set is in the valid categories.
        return len(target_nodes & valid_categories) > 0

    @classmethod
    def _validate_include(cls, include: Optional[Set[TokenCategory]]) -> Set[TokenCategory]:
        """
        Validate the include set.
        """
        if include is None:
            return cls.all()
        if isinstance(include, (list, tuple)):
            include = set(include)
        elif not isinstance(include, set):
            include = {include}
        if not all(isinstance(cat, TokenCategory) for cat in include):
            raise ValueError('Invalid category: include and exclude must be a set of TokenCategory.')
        return include

    @classmethod
    def _validate_exclude(cls, exclude: Optional[Set[TokenCategory]]) -> Set[TokenCategory]:
        """
        Validate the exclude set.
        """
        if exclude is None:
            return set()
        if isinstance(exclude, (list, tuple)):
            exclude = set(exclude)
        elif not isinstance(exclude, set):
            exclude = {exclude}
        if not all(isinstance(cat, TokenCategory) for cat in exclude):
            raise ValueError(f'Invalid category: category must be a {TokenCategory.__name__}.')
        return exclude


    @classmethod
    def match(cls, category: TokenCategory, *,
              include: Optional[Set[TokenCategory]] = None,
              exclude: Optional[Set[TokenCategory]] = None) -> bool:
        """
        Check if the category matches the include and exclude sets.
            If include is None, all categories are included. \
            If exclude is None, no categories are excluded.

        Args:
            category (TokenCategory): The category to check.
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (bool): True if the category matches the include and exclude sets, False otherwise.

        Examples:
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST})
            True
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.REST})
            True
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.NOTE})
            False
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
            True
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.DURATION, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
            False
        """
        include = cls._validate_include(include)
        exclude = cls._validate_exclude(exclude)

        return cls._match(category, include=include, exclude=exclude)

    @classmethod
    def all(cls) -> Set[TokenCategory]:
        """
        Get all categories in the hierarchy.

        Returns:
            Set[TokenCategory]: The set of all categories in the hierarchy.
        """
        return cls._nodes(cls.hierarchy)

    @classmethod
    def tree(cls) -> str:
        """
        Return a string representation of the category hierarchy,
        formatted similar to the output of the Unix 'tree' command.

        Example output:
            .
            ├── STRUCTURAL
            ├── CORE
            │   ├── NOTE_REST
            │   │   ├── DURATION
            │   │   ├── NOTE
            │   │   │   ├── PITCH
            │   │   │   └── DECORATION
            │   │   └── REST
            │   ├── CHORD
            │   └── EMPTY
            ├── SIGNATURES
            │   ├── CLEF
            │   ├── TIME_SIGNATURE
            │   ├── METER_SYMBOL
            │   └── KEY_SIGNATURE
            ├── ENGRAVED_SYMBOLS
            ├── OTHER_CONTEXTUAL
            ├── BARLINES
            ├── COMMENTS
            │   ├── FIELD_COMMENTS
            │   └── LINE_COMMENTS
            ├── DYNAMICS
            ├── HARMONY
            ...
        """
        def build_tree(tree: Dict[TokenCategory, '_hierarchy_typing'], prefix: str = "") -> list[str]:
            lines_buffer = []
            items = list(tree.items())
            count = len(items)
            for index, (category, subtree) in enumerate(items):
                connector = "└── " if index == count - 1 else "├── "
                lines_buffer.append(prefix + connector + str(category))
                extension = "    " if index == count - 1 else "│   "
                lines_buffer.extend(build_tree(subtree, prefix + extension))
            return lines_buffer

        lines = ["."]
        lines.extend(build_tree(cls.hierarchy))
        return "\n".join(lines)

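The descendant test underlying `is_child` and `match` is a plain recursive walk: locate the parent's subtree, then check membership in its node set. A compact standalone sketch of that recursion over a toy hierarchy (hypothetical names, not the real mapper's data):

```python
from typing import Dict, Optional, Set

TOY: Dict[str, dict] = {
    "CORE": {"NOTE_REST": {"NOTE": {"PITCH": {}}, "REST": {}}},
    "COMMENTS": {"LINE_COMMENTS": {}},
}

def find_subtree(tree: Dict[str, dict], target: str) -> Optional[dict]:
    """Locate target anywhere in the tree and return its children mapping."""
    if target in tree:
        return tree[target]
    for sub in tree.values():
        hit = find_subtree(sub, target)
        if hit is not None:
            return hit
    return None

def nodes(tree: Dict[str, dict]) -> Set[str]:
    """All keys in the tree, at any depth."""
    out = set(tree)
    for sub in tree.values():
        out |= nodes(sub)
    return out

def is_child(parent: str, child: str) -> bool:
    if parent == child:  # a category counts as its own child, as in the mapper
        return True
    sub = find_subtree(TOY, parent)
    return sub is not None and child in nodes(sub)

print(is_child("CORE", "PITCH"), is_child("COMMENTS", "NOTE"))
```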
all() classmethod

Get all categories in the hierarchy.

Returns:

Type Description
Set[TokenCategory]

Set[TokenCategory]: The set of all categories in the hierarchy.

Source code in kernpy/core/tokens.py
@classmethod
def all(cls) -> Set[TokenCategory]:
    """
    Get all categories in the hierarchy.

    Returns:
        Set[TokenCategory]: The set of all categories in the hierarchy.
    """
    return cls._nodes(cls.hierarchy)

children(parent) classmethod

Get the direct children of the parent category.

Parameters:

Name Type Description Default
parent TokenCategory

The parent category.

required

Returns:

Type Description
Set[TokenCategory]

Set[TokenCategory]: The set of direct child categories of the parent category.

Source code in kernpy/core/tokens.py
@classmethod
def children(cls, parent: TokenCategory) -> Set[TokenCategory]:
    """
    Get the direct children of the parent category.

    Args:
        parent (TokenCategory): The parent category.

    Returns:
        Set[TokenCategory]: The list of children categories of the parent category.
    """
    return set(cls.hierarchy.get(parent, {}).keys())

is_child(parent, child) classmethod

Recursively check if child is in the subtree of parent. If parent is the same as child, return True.

Parameters:

Name Type Description Default
parent TokenCategory

The parent category.

required
child TokenCategory

The category to check.

required

Returns:

Name Type Description
bool bool

True if child is a descendant of parent, False otherwise.

Source code in kernpy/core/tokens.py
@classmethod
def is_child(cls, parent: TokenCategory, child: TokenCategory) -> bool:
    """
    Recursively check if `child` is in the subtree of `parent`. If `parent` is the same as `child`, return True.

    Args:
        parent (TokenCategory): The parent category.
        child (TokenCategory): The category to check.

    Returns:
        bool: True if `child` is a descendant of `parent`, False otherwise.
    """
    if parent == child:
        return True
    return cls._is_child(parent, child, tree=cls.hierarchy)
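The descendant check can be reproduced over a plain nested dictionary. A sketch under the same assumptions as above (string categories, toy hierarchy), including the parent-equals-child shortcut the real method applies before recursing:

```python
def _contains(tree, target):
    """True if `target` appears anywhere in `tree`."""
    for category, subtree in tree.items():
        if category == target or _contains(subtree, target):
            return True
    return False

def is_child(parent, child, tree):
    """True if `child` is `parent` itself or lies anywhere in `parent`'s subtree."""
    if parent == child:
        return True
    for category, subtree in tree.items():
        if category == parent:
            # Found the parent: search only its subtree (categories are unique in the tree).
            return _contains(subtree, child)
        if is_child(parent, child, subtree):
            return True
    return False

hierarchy = {"CORE": {"NOTE_REST": {"NOTE": {}, "REST": {}}}}
print(is_child("CORE", "NOTE", hierarchy))  # → True  (NOTE descends from CORE)
print(is_child("NOTE", "CORE", hierarchy))  # → False (the reverse does not hold)
```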

leaves(target) classmethod

Get the leaves of the subtree of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of leaf categories of the target category.

Source code in kernpy/core/tokens.py
@classmethod
def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the leaves of the subtree of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (List[TokenCategory]): The list of leaf categories of the target category.
    """
    tree = cls._find_subtree(cls.hierarchy, target)
    return cls._leaves(tree)
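Leaf collection is another short recursion over the nested-dictionary hierarchy. A standalone sketch (string categories and the toy hierarchy are assumptions for the demo):

```python
def leaves(tree):
    """Categories with no children, at any depth of the (sub)tree."""
    found = set()
    for category, subtree in tree.items():
        # A category with an empty subtree is a leaf; otherwise recurse.
        found |= leaves(subtree) if subtree else {category}
    return found

hierarchy = {"CORE": {"NOTE_REST": {"NOTE": {}, "REST": {}}, "CHORD": {}}}
print(sorted(leaves(hierarchy)))  # → ['CHORD', 'NOTE', 'REST']
```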

match(category, *, include=None, exclude=None) classmethod

Check if the category matches the include and exclude sets. If include is None, all categories are included. If exclude is None, no categories are excluded.

Parameters:

Name Type Description Default
category TokenCategory

The category to check.

required
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (bool): True if the category matches the include and exclude sets, False otherwise.

Examples:

>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST})
True
>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.REST})
True
>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.NOTE})
False
>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
True
>>> TokenCategoryHierarchyMapper.match(TokenCategory.DURATION, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
False
Source code in kernpy/core/tokens.py
@classmethod
def match(cls, category: TokenCategory, *,
          include: Optional[Set[TokenCategory]] = None,
          exclude: Optional[Set[TokenCategory]] = None) -> bool:
    """
    Check if the category matches the include and exclude sets.
        If include is None, all categories are included. \
        If exclude is None, no categories are excluded.

    Args:
        category (TokenCategory): The category to check.
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (bool): True if the category matches the include and exclude sets, False otherwise.

    Examples:
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST})
        True
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.REST})
        True
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.NOTE})
        False
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
        True
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.DURATION, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
        False
    """
    include = cls._validate_include(include)
    exclude = cls._validate_exclude(exclude)

    return cls._match(category, include=include, exclude=exclude)
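The include/exclude semantics shown in the doctests above follow from one rule: a category "matches" an anchor set when it equals an anchor or descends from one, and exclusion wins over inclusion. A standalone sketch of that rule over a toy string hierarchy (an assumption, not kernpy's internal `_match`):

```python
def find_subtree(tree, target):
    """Depth-first search for the subtree rooted at `target`, or None."""
    for category, subtree in tree.items():
        if category == target:
            return subtree
        found = find_subtree(subtree, target)
        if found is not None:
            return found
    return None

def all_nodes(tree):
    nodes = set()
    for category, subtree in tree.items():
        nodes.add(category)
        nodes |= all_nodes(subtree)
    return nodes

def match(tree, category, include=None, exclude=None):
    """Included if `category` is (or descends from) an include anchor,
    and not (or not descending from) an exclude anchor."""
    def covered(anchors):
        for anchor in anchors:
            subtree = find_subtree(tree, anchor)
            if category == anchor or (subtree is not None and category in all_nodes(subtree)):
                return True
        return False

    included = True if include is None else covered(include)
    excluded = False if exclude is None else covered(exclude)
    return included and not excluded

hierarchy = {"CORE": {"NOTE_REST": {"NOTE": {}, "REST": {}, "DURATION": {}}}}
print(match(hierarchy, "NOTE", include={"NOTE_REST"}))                     # → True
print(match(hierarchy, "NOTE", include={"NOTE_REST"}, exclude={"NOTE"}))   # → False
print(match(hierarchy, "NOTE", include={"CORE"}, exclude={"DURATION"}))    # → True
```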

nodes(parent) classmethod

Get all the nodes of the subtree of the parent category.

Parameters:

Name Type Description Default
parent TokenCategory

The parent category.

required

Returns:

Type Description
Set[TokenCategory]

Set[TokenCategory]: The set of nodes of the subtree of the parent category.

Source code in kernpy/core/tokens.py
@classmethod
def nodes(cls, parent: TokenCategory) -> Set[TokenCategory]:
    """
    Get the all nodes of the subtree of the parent category.

    Args:
        parent (TokenCategory): The parent category.

    Returns:
        List[TokenCategory]: The list of nodes of the subtree of the parent category.
    """
    subtree = cls._find_subtree(cls.hierarchy, parent)
    return cls._nodes(subtree) if subtree is not None else set()

tree() classmethod

Return a string representation of the category hierarchy, formatted similarly to the output of the Unix 'tree' command.

Example output

.
├── STRUCTURAL
├── CORE
│   ├── NOTE_REST
│   │   ├── DURATION
│   │   ├── NOTE
│   │   │   ├── PITCH
│   │   │   └── DECORATION
│   │   └── REST
│   ├── CHORD
│   └── EMPTY
├── SIGNATURES
│   ├── CLEF
│   ├── TIME_SIGNATURE
│   ├── METER_SYMBOL
│   └── KEY_SIGNATURE
├── ENGRAVED_SYMBOLS
├── OTHER_CONTEXTUAL
├── BARLINES
├── COMMENTS
│   ├── FIELD_COMMENTS
│   └── LINE_COMMENTS
├── DYNAMICS
├── HARMONY
...

Source code in kernpy/core/tokens.py
@classmethod
def tree(cls) -> str:
    """
    Return a string representation of the category hierarchy,
    formatted similar to the output of the Unix 'tree' command.

    Example output:
        .
        ├── STRUCTURAL
        ├── CORE
        │   ├── NOTE_REST
        │   │   ├── DURATION
        │   │   ├── NOTE
        │   │   │   ├── PITCH
        │   │   │   └── DECORATION
        │   │   └── REST
        │   ├── CHORD
        │   └── EMPTY
        ├── SIGNATURES
        │   ├── CLEF
        │   ├── TIME_SIGNATURE
        │   ├── METER_SYMBOL
        │   └── KEY_SIGNATURE
        ├── ENGRAVED_SYMBOLS
        ├── OTHER_CONTEXTUAL
        ├── BARLINES
        ├── COMMENTS
        │   ├── FIELD_COMMENTS
        │   └── LINE_COMMENTS
        ├── DYNAMICS
        ├── HARMONY
        ...
    """
    def build_tree(tree: Dict[TokenCategory, '_hierarchy_typing'], prefix: str = "") -> [str]:
        lines_buffer = []
        items = list(tree.items())
        count = len(items)
        for index, (category, subtree) in enumerate(items):
            connector = "└── " if index == count - 1 else "├── "
            lines_buffer.append(prefix + connector + str(category))
            extension = "    " if index == count - 1 else "│   "
            lines_buffer.extend(build_tree(subtree, prefix + extension))
        return lines_buffer

    lines = ["."]
    lines.extend(build_tree(cls.hierarchy))
    return "\n".join(lines)
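The `build_tree` helper above works for any nested mapping. A compact standalone variant run on a small hierarchy (string keys stand in for `TokenCategory` in this demo):

```python
def build_tree(tree, prefix=""):
    """Render a nested dict as Unix `tree`-style lines."""
    lines = []
    items = list(tree.items())
    for index, (category, subtree) in enumerate(items):
        last = index == len(items) - 1
        lines.append(prefix + ("└── " if last else "├── ") + str(category))
        # Children of the last sibling get a blank gutter; others keep the vertical bar.
        lines.extend(build_tree(subtree, prefix + ("    " if last else "│   ")))
    return lines

hierarchy = {"CORE": {"NOTE_REST": {"NOTE": {}, "REST": {}}}, "BARLINES": {}}
print("\n".join(["."] + build_tree(hierarchy)))
```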

valid(include=None, exclude=None) classmethod

Get the valid categories based on the include and exclude sets.

Parameters:

Name Type Description Default
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.

Source code in kernpy/core/tokens.py
@classmethod
def valid(cls,
          include: Optional[Set[TokenCategory]] = None,
          exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
    """
    Get the valid categories based on the include and exclude sets.

    Args:
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (Set[TokenCategory]): The list of valid categories based on the include and exclude sets.
    """
    include = cls._validate_include(include)
    exclude = cls._validate_exclude(exclude)

    included_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in include]) if len(include) > 0 else include
    excluded_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in exclude]) if len(exclude) > 0 else exclude
    return included_nodes - excluded_nodes
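The set arithmetic above (each anchor expanded to itself plus its descendants, then the exclusions subtracted) can be sketched standalone over a toy string hierarchy (an assumption for illustration):

```python
def all_nodes(tree):
    nodes = set()
    for category, subtree in tree.items():
        nodes.add(category)
        nodes |= all_nodes(subtree)
    return nodes

def find_subtree(tree, target):
    for category, subtree in tree.items():
        if category == target:
            return subtree
        found = find_subtree(subtree, target)
        if found is not None:
            return found
    return None

def valid(tree, include=None, exclude=None):
    """Every included anchor plus its descendants, minus the excluded ones."""
    def expand(anchors):
        expanded = set()
        for anchor in anchors:
            expanded.add(anchor)
            subtree = find_subtree(tree, anchor)
            if subtree is not None:
                expanded |= all_nodes(subtree)
        return expanded

    include = include if include is not None else all_nodes(tree)
    exclude = exclude if exclude is not None else set()
    return expand(include) - expand(exclude)

hierarchy = {"CORE": {"NOTE_REST": {"NOTE": {}, "REST": {}}}}
print(sorted(valid(hierarchy, include={"NOTE_REST"}, exclude={"REST"})))
# → ['NOTE', 'NOTE_REST']
```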

Tokenizer

Bases: ABC

Tokenizer interface. All tokenizers must implement this interface.

Tokenizers are responsible for converting a token into a string representation.

Source code in kernpy/core/tokenizers.py
class Tokenizer(ABC):
    """
    Tokenizer interface. All tokenizers must implement this interface.

    Tokenizers are responsible for converting a token into a string representation.
    """
    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new Tokenizer.

        Args:
            token_categories Set[TokenCategory]: List of categories to be tokenized.
                If None, an exception will be raised.
        """
        if token_categories is None:
            raise ValueError('Categories must be provided. Found None.')

        self.token_categories = token_categories


    @abstractmethod
    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a string representation.

        Args:
            token (Token): Token to be tokenized.

        Returns (str): Tokenized string representation.

        """
        pass

__init__(*, token_categories)

Create a new Tokenizer.

Parameters:

Name Type Description Default
token_categories Set[TokenCategory]

Set of categories to be tokenized. If None, an exception will be raised.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new Tokenizer.

    Args:
        token_categories Set[TokenCategory]: List of categories to be tokenized.
            If None, an exception will be raised.
    """
    if token_categories is None:
        raise ValueError('Categories must be provided. Found None.')

    self.token_categories = token_categories

tokenize(token) abstractmethod

Tokenize a token into a string representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): Tokenized string representation.

Source code in kernpy/core/tokenizers.py
@abstractmethod
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a string representation.

    Args:
        token (Token): Token to be tokenized.

    Returns (str): Tokenized string representation.

    """
    pass
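A concrete tokenizer only needs to implement `tokenize`. A minimal standalone sketch with simplified stand-ins for `Token` and `TokenCategory` (the `PlainTokenizer` class and its `'.'` placeholder for filtered-out tokens are assumptions for this demo, not kernpy's built-in tokenizers):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Set

@dataclass
class Token:
    """Simplified stand-in for kernpy's Token."""
    encoding: str
    category: str  # stand-in for TokenCategory

class Tokenizer(ABC):
    def __init__(self, *, token_categories: Set[str]):
        if token_categories is None:
            raise ValueError('Categories must be provided. Found None.')
        self.token_categories = token_categories

    @abstractmethod
    def tokenize(self, token: Token) -> str:
        ...

class PlainTokenizer(Tokenizer):
    """Hypothetical tokenizer: emit the encoding only for selected categories."""
    def tokenize(self, token: Token) -> str:
        if token.category in self.token_categories:
            return token.encoding
        return '.'  # placeholder for filtered-out tokens (demo choice)

tokenizer = PlainTokenizer(token_categories={'NOTE'})
print(tokenizer.tokenize(Token('4c', 'NOTE')))        # → 4c
print(tokenizer.tokenize(Token('!comment', 'COMMENT')))  # → .
```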

agnostic_distance(first_pitch, second_pitch)

Calculate the distance in semitones between two pitches.

Parameters:

Name Type Description Default
first_pitch AgnosticPitch

The first pitch to compare.

required
second_pitch AgnosticPitch

The second pitch to compare.

required

Returns:

Name Type Description
int int

The distance in semitones between the two pitches.

Examples:

>>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('E4'))
4
>>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('B3'))
-1
Source code in kernpy/core/transposer.py
def agnostic_distance(
    first_pitch: AgnosticPitch,
    second_pitch: AgnosticPitch,
) -> int:
    """
    Calculate the distance in semitones between two pitches.

    Args:
        first_pitch (AgnosticPitch): The first pitch to compare.
        second_pitch (AgnosticPitch): The second pitch to compare.

    Returns:
        int: The distance in semitones between the two pitches.

    Examples:
        >>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('E4'))
        4
        >>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('B3'))
        -1
    """
    def semitone_index(p: AgnosticPitch) -> int:
        # base letter:
        letter = p.name.replace('+', '').replace('-', '')
        base = LETTER_TO_SEMITONES[letter]
        # accidentals: '+' is one sharp, '-' one flat
        alteration = p.name.count('+') - p.name.count('-')
        return p.octave * 12 + base + alteration

    return semitone_index(second_pitch) - semitone_index(first_pitch)
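The semitone arithmetic can be reproduced standalone: each pitch maps to `octave * 12 + letter offset + accidentals`, and the distance is the difference of those indices. A sketch using `(letter, octave)` tuples and a reconstructed `LETTER_TO_SEMITONES` table (both assumptions; kernpy operates on `AgnosticPitch` objects):

```python
# Chromatic offsets of the natural letters within an octave (assumed table).
LETTER_TO_SEMITONES = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def semitone_index(name: str, octave: int) -> int:
    """Absolute chromatic index: octaves * 12 + letter offset + accidentals."""
    letter = name.replace('+', '').replace('-', '')
    alteration = name.count('+') - name.count('-')  # '+' is one sharp, '-' one flat
    return octave * 12 + LETTER_TO_SEMITONES[letter] + alteration

def agnostic_distance(first: tuple, second: tuple) -> int:
    """Signed distance in semitones; positive when `second` is higher."""
    return semitone_index(*second) - semitone_index(*first)

print(agnostic_distance(('C', 4), ('E', 4)))  # → 4
print(agnostic_distance(('C', 4), ('B', 3)))  # → -1
```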

concat(contents, *, separator='\n')

Concatenate multiple **kern fragments into a single Document object. All the fragments should be presented in order. Each fragment does not need to be a complete **kern file.

Warnings:
    Processing a large number of files in a row may take some time.
    This method performs as many `kp.read` operations as there are fragments to concatenate.

Args:
    contents (Sequence[str]): List of **kern strings.
    separator (Optional[str]): Separator string to separate the **kern fragments. Default is '\n' (newline).

Returns (Tuple[Document, List[Tuple[int, int]]]): Document object and a List of Pairs (Tuple[int, int]) representing the measure fragment indexes of the concatenated document.

Examples:
    >>> import kernpy as kp
    >>> contents = ['**kern\n4e\n4f\n4g\n*-\n', '4a\n4b\n4c\n*-\n=\n', '4d\n4e\n4f\n*-\n']
    >>> document, indexes = kp.concat(contents)
    >>> indexes
    [(0, 3), (3, 6), (6, 9)]
    >>> document, indexes = kp.concat(contents, separator='\n')
    >>> indexes
    [(0, 3), (3, 6), (6, 9)]
    >>> document, indexes = kp.concat(contents, separator='')
    >>> indexes
    [(0, 3), (3, 6), (6, 9)]
    >>> for start, end in indexes:
    ...     print(kp.dumps(document, from_measure=start, to_measure=end))

Source code in kernpy/io/public.py
def concat(
        contents: List[str],
        *,
        separator: Optional[str] = '\n',
) -> Tuple[Document, List[Tuple[int, int]]]:
    """
    Concatenate multiple **kern fragments into a single Document object. \
     All the fragments should be presented in order. Each fragment does not need to be a complete **kern file. \

    Warnings:
        Processing a large number of files in a row may take some time.
         This method performs as many `kp.read` operations as there are fragments to concatenate.
    Args:
        contents (Sequence[str]): List of **kern strings
        separator (Optional[str]): Separator string to separate the **kern fragments. Default is '\n' (newline).

    Returns (Tuple[Document, List[Tuple[int, int]]]): Document object and \
      a List of Pairs (Tuple[int, int]) representing the measure fragment indexes of the concatenated document.

    Examples:
        >>> import kernpy as kp
        >>> contents = ['**kern\n4e\n4f\n4g\n*-\n', '4a\n4b\n4c\n*-\n=\n', '4d\n4e\n4f\n*-\n']
        >>> document, indexes = kp.concat(contents)
        >>> indexes
        [(0, 3), (3, 6), (6, 9)]
        >>> document, indexes = kp.concat(contents, separator='\n')
        >>> indexes
        [(0, 3), (3, 6), (6, 9)]
        >>> document, indexes = kp.concat(contents, separator='')
        >>> indexes
        [(0, 3), (3, 6), (6, 9)]
        >>> for start, end in indexes:
        ...     print(kp.dumps(document, from_measure=start, to_measure=end))
    """
    return generic.Generic.concat(
        contents=contents,
        separator=separator,
    )

create(content, strict=False)

Deprecated: use `loads` instead.

Create a Document object from a string encoded in Humdrum **kern format.

Args:
    content: String encoded in Humdrum **kern format.
    strict: If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

Returns (Document, list): Document object and list of error messages. Empty list if no errors.

Examples:
    >>> import kernpy as kp
    >>> document, errors = kp.create('**kern\n4e\n4f\n4g\n*-\n')
    >>> if len(errors) > 0:
    ...     print(errors)
    ['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']

Source code in kernpy/core/generic.py
@deprecated("Use 'loads' instead.")
def create(
        content: str,
        strict=False
) -> (Document, []):
    """
    Create a Document object from a string encoded in Humdrum **kern format.

    Args:
        content: String encoded in Humdrum **kern format
        strict: If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

    Returns (Document, list): Document object and list of error messages. Empty list if no errors.

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.create('**kern\n4e\n4f\n4g\n*-\n')
        >>> if len(errors) > 0:
        >>>     print(errors)
        ['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
    """
    return Generic.create(
        content=content,
        strict=strict
    )

distance(first_encoding, second_encoding, *, first_format=NotationEncoding.HUMDRUM.value, second_format=NotationEncoding.HUMDRUM.value)

Calculate the distance in semitones between two pitches.

Parameters:

Name Type Description Default
first_encoding str

The first pitch to compare.

required
second_encoding str

The second pitch to compare.

required
first_format str

The encoding format of the first pitch. Default is HUMDRUM.

HUMDRUM.value
second_format str

The encoding format of the second pitch. Default is HUMDRUM.

HUMDRUM.value

Returns:

Name Type Description
int int

The distance in semitones between the two pitches.

Examples:

>>> distance('C4', 'E4')
4
>>> distance('C4', 'B3')
-1
Source code in kernpy/core/transposer.py
def distance(
    first_encoding: str,
    second_encoding: str,
    *,
    first_format: str = NotationEncoding.HUMDRUM.value,
    second_format: str = NotationEncoding.HUMDRUM.value,
) -> int:
    """
    Calculate the distance in semitones between two pitches.

    Args:
        first_encoding (str): The first pitch to compare.
        second_encoding (str): The second pitch to compare.
        first_format (str): The encoding format of the first pitch. Default is HUMDRUM.
        second_format (str): The encoding format of the second pitch. Default is HUMDRUM.

    Returns:
        int: The distance in semitones between the two pitches.

    Examples:
        >>> distance('C4', 'E4')
        4
        >>> distance('C4', 'B3')
        -1
    """
    first_importer = PitchImporterFactory.create(first_format)
    first_pitch: AgnosticPitch = first_importer.import_pitch(first_encoding)

    second_importer = PitchImporterFactory.create(second_format)
    second_pitch: AgnosticPitch = second_importer.import_pitch(second_encoding)

    return agnostic_distance(first_pitch, second_pitch)

download_polish_scores(input_directory, output_directory, remove_empty_directories=True, kern_spines_filter=2, exporter_kern_type='ekern')

Process the files in input_directory and save the results in output_directory. HTTP requests are made to download the images.

Parameters:

Name Type Description Default
input_directory str

directory where the input files are found

required
output_directory str

directory where the output files are saved

required
remove_empty_directories Optional[bool]

remove empty directories when finish processing the files

True
kern_spines_filter Optional[int]

Only process files with the number of **kern spines specified. Use it to export 2-voice files. Default is 2. Use None to process all files.

2
exporter_kern_type Optional[str]

the type of kern exporter. It can be 'krn' or 'ekrn'

'ekern'

Returns:

Type Description
None

None

Examples:

>>> main('/kern_files', '/output_ekern')
None
>>> main('/kern_files', '/output_ekern', remove_empty_directories=False)
None
>>> main('/kern_files', '/output_ekern', kern_spines_filter=2, remove_empty_directories=False)
None
>>> main('/kern_files', '/output_ekern', kern_spines_filter=None, remove_empty_directories=False)
None
>>> main('/kern_files', '/output_ekern', exporter_kern_type='krn', remove_empty_directories=True)
None
>>> main('/kern_files', '/output_ekern', exporter_kern_type='ekrn', remove_empty_directories=True, kern_spines_filter=2)
None
Source code in kernpy/polish_scores/download_polish_dataset.py
def main(
        input_directory: str,
        output_directory: str,
        remove_empty_directories: Optional[bool] = True,
        kern_spines_filter: Optional[int] = 2,
        exporter_kern_type: Optional[str] = 'ekern'
) -> None:
    """
    Process the files in the input_directory and save the results in the output_directory.
    http requests are made to download the images.

    Args:
        input_directory (str): directory where the input files are found
        output_directory (str): directory where the output files are saved
        remove_empty_directories (Optional[bool]): remove empty directories when finish processing the files
        kern_spines_filter (Optional[int]): Only process files with the number of **kern spines specified.\
            Use it to export 2-voice files. Default is 2.\
            Use None to process all files.
        exporter_kern_type (Optional[str]): the type of kern exporter. It can be 'krn' or 'ekrn'



    Returns:
        None

    Examples:
        >>> main('/kern_files', '/output_ekern')
        None

        >>> main('/kern_files', '/output_ekern', remove_empty_directories=False)
        None

        >>> main('/kern_files', '/output_ekern', kern_spines_filter=2, remove_empty_directories=False)
        None

        >>> main('/kern_files', '/output_ekern', kern_spines_filter=None, remove_empty_directories=False)
        None

        >>> main('/kern_files', '/output_ekern', exporter_kern_type='krn', remove_empty_directories=True)
        None

        >>> main('/kern_files', '/output_ekern', exporter_kern_type='ekrn', remove_empty_directories=True, kern_spines_filter=2)
        None

    """
    print(f'Processing files in {input_directory} and saving to {output_directory}')
    kern_with_bboxes = search_files_with_string(input_directory, 'xywh')
    ok_files = []
    ko_files = []
    log_file = os.path.join(output_directory, LOG_FILENAME)
    print(f"{25*'='}"
          f"\nProcessing {len(kern_with_bboxes)} files."
          f"\nLog will be saved in {log_file}."
          f"\n{25*'='}")
    for kern in kern_with_bboxes:
        try:
            filename = remove_extension(kern)
            kern_path = os.path.join(input_directory, kern)
            output_kern_path = os.path.join(output_directory, filename)
            if not os.path.exists(output_kern_path):
                os.makedirs(output_kern_path)
            convert_and_download_file(kern_path, output_kern_path, log_filename=log_file, kern_spines_filter=kern_spines_filter, exporter_kern_type=exporter_kern_type)
            ok_files.append(kern)
        except Exception as error:
            ko_files.append(kern)
            print(f'Errors in {kern}: {error}')
            store_error_log(os.path.join(output_directory, 'errors.json'), {'kern': kern, 'error': str(error)})

    if remove_empty_directories:
        remove_empty_dirs(output_directory)

    print(f'----> OK files #{len(ok_files)}')
    print(f'----> KO files #{len(ko_files)}')
    print(ko_files)

dump(document, fp, *, spine_types=None, include=None, exclude=None, from_measure=None, to_measure=None, encoding=None, instruments=None, show_measure_numbers=None, spine_ids=None)

Parameters:

Name Type Description Default
document Document

The Document object to write to the file.

required
fp Union[str, Path]

The file path to write the Document object.

required
spine_types Iterable

**kern, **mens, etc.

None
include Iterable

The token categories to include in the exported file. When None, all the token categories will be exported.

None
exclude Iterable

The token categories to exclude from the exported file. When None, no token categories will be excluded.

None
from_measure int

The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1.

None
to_measure int

The measure to end exporting. When None, the exporter will end at the end of the file.

None
encoding Encoding

The type of the **kern file to export.

None
instruments Iterable

The instruments to export. If None, all the instruments will be exported.

None
show_measure_numbers Bool

Show the measure numbers in the exported file.

None
spine_ids Iterable

The ids of the spines to export. When None, all the spines will be exported. Spine ids start from 0 and increase by 1 for each spine to the right.

None

Returns (None): None

Raises:

Type Description
ValueError

If the document could not be exported.

Examples:

>>> import kernpy as kp
>>> document, errors = kp.load('BWV565.krn')
>>> kp.dump(document, 'BWV565_normalized.krn')
None
>>> # File 'BWV565_normalized.krn' will be created with the normalized **kern representation.
Source code in kernpy/io/public.py
def dump(document: Document, fp: Union[str, Path], *,
         spine_types: [str] = None,
         include: [TokenCategory] = None,
         exclude: [TokenCategory] = None,
         from_measure: int = None,
         to_measure: int = None,
         encoding: Encoding = None,
         instruments: [str] = None,
         show_measure_numbers: bool = None,
         spine_ids: [int] = None
         ) -> None:
    """

    Args:
        document (Document): The Document object to write to the file.
        fp (Union[str, Path]): The file path to write the Document object.
        spine_types (Iterable): **kern, **mens, etc...
        include (Iterable): The token categories to include in the exported file. When None, all the token categories will be exported.
        exclude (Iterable): The token categories to exclude from the exported file. When None, no token categories will be excluded.
        from_measure (int): The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1
        to_measure (int): The measure to end exporting. When None, the exporter will end at the end of the file.
        encoding (Encoding): The type of the **kern file to export.
        instruments (Iterable): The instruments to export. If None, all the instruments will be exported.
        show_measure_numbers (Bool): Show the measure numbers in the exported file.
        spine_ids (Iterable): The ids of the spines to export. When None, all the spines will be exported. \
            Spines ids start from 0, and they are increased by 1 for each spine to the right.


    Returns (None): None

    Raises:
        ValueError: If the document could not be exported.

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.load('BWV565.krn')
        >>> kp.dump(document, 'BWV565_normalized.krn')
        None
        >>> # File 'BWV565_normalized.krn' will be created with the normalized **kern representation.
    """
    # Create an ExportOptions instance with only user-modified arguments
    options = generic.Generic.parse_options_to_ExportOptions(
        spine_types=spine_types,
        include=include,
        exclude=exclude,
        from_measure=from_measure,
        to_measure=to_measure,
        kern_type=encoding,
        instruments=instruments,
        show_measure_numbers=show_measure_numbers,
        spine_ids=spine_ids
    )

    return generic.Generic.store(
        document=document,
        path=fp,
        options=options
    )

dumps(document, *, spine_types=None, include=None, exclude=None, from_measure=None, to_measure=None, encoding=None, instruments=None, show_measure_numbers=None, spine_ids=None)

Args:
    document (Document): The Document object to export.
    spine_types (Iterable): **kern, **mens, etc.
    include (Iterable): The token categories to include in the exported file. When None, all the token categories will be exported.
    exclude (Iterable): The token categories to exclude from the exported file. When None, no token categories will be excluded.
    from_measure (int): The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1.
    to_measure (int): The measure to end exporting. When None, the exporter will end at the end of the file.
    encoding (Encoding): The type of the **kern file to export.
    instruments (Iterable): The instruments to export. If None, all the instruments will be exported.
    show_measure_numbers (Bool): Show the measure numbers in the exported file.
    spine_ids (Iterable): The ids of the spines to export. When None, all the spines will be exported. Spine ids start from 0 and increase by 1 for each spine to the right.

Returns (str): The document exported as a Humdrum **kern string.

Raises:
    ValueError: If the document could not be exported.

Examples:
    >>> import kernpy as kp
    >>> document, errors = kp.load('score.krn')
    >>> kp.dumps(document)
    '**kern\n*clefG2\n=1\n4c\n4d\n4e\n4f\n*-'

Source code in kernpy/io/public.py
def dumps(document: Document, *,
          spine_types: [str] = None,
          include: [TokenCategory] = None,
          exclude: [TokenCategory] = None,
          from_measure: int = None,
          to_measure: int = None,
          encoding: Encoding = None,
          instruments: [str] = None,
          show_measure_numbers: bool = None,
          spine_ids: [int] = None
          ) -> str:
    """

    Args:
        document (Document): The Document object to export.
        spine_types (Iterable): **kern, **mens, etc.
        include (Iterable): The token categories to include in the exported file. When None, all token categories will be exported.
        exclude (Iterable): The token categories to exclude from the exported file. When None, no token categories will be excluded.
        from_measure (int): The measure to start exporting from. When None, the exporter will start from the beginning of the file. The first measure is 1.
        to_measure (int): The measure to end exporting at. When None, the exporter will end at the end of the file.
        encoding (Encoding): The type of the **kern file to export.
        instruments (Iterable): The instruments to export. If None, all the instruments will be exported.
        show_measure_numbers (bool): Show the measure numbers in the exported file.
        spine_ids (Iterable): The ids of the spines to export. When None, all the spines will be exported. \
            Spine ids start at 0 and increase by 1 for each spine to the right.


    Returns (str): The exported **kern content as a string.

    Raises:
        ValueError: If the document could not be exported.

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.load('score.krn')
        >>> kp.dumps(document)
        '**kern\n*clefG2\n=1\n4c\n4d\n4e\n4f\n*-'
    """
    # Create an ExportOptions instance with only user-modified arguments
    options = generic.Generic.parse_options_to_ExportOptions(
        spine_types=spine_types,
        include=include,
        exclude=exclude,
        from_measure=from_measure,
        to_measure=to_measure,
        kern_type=encoding,
        instruments=instruments,
        show_measure_numbers=show_measure_numbers,
        spine_ids=spine_ids
    )

    return generic.Generic.export(
        document=document,
        options=options
    )

ekern_to_krn(input_file, output_file)

Convert one .ekrn file to .krn file.

Parameters:

Name Type Description Default
input_file str

Filepath to the input **ekern

required
output_file str

Filepath to the output **kern

required

Returns: None

Example

Convert .ekrn to .krn

ekern_to_krn('path/to/file.ekrn', 'path/to/file.krn')

Convert a list of .ekrn files to .krn files

ekrn_files = your_module.get_files()

# Use the wrapper to avoid stopping the process if an error occurs
def ekern_to_krn_wrapper(ekern_file, kern_file):
    try:
        ekern_to_krn(ekern_file, kern_file)
    except Exception as e:
        print(f'Error: {e}')

# Convert all the files
for ekern_file in ekrn_files:
    output_file = ekern_file.replace('.ekrn', '.krn')
    ekern_to_krn_wrapper(ekern_file, output_file)
Source code in kernpy/core/exporter.py
def ekern_to_krn(
        input_file: str,
        output_file: str
) -> None:
    """
    Convert one .ekrn file to .krn file.

    Args:
        input_file (str): Filepath to the input **ekern
        output_file (str): Filepath to the output **kern
    Returns:
        None

    Example:
        # Convert .ekrn to .krn
        >>> ekern_to_krn('path/to/file.ekrn', 'path/to/file.krn')

        # Convert a list of .ekrn files to .krn files
        ```python
        ekrn_files = your_module.get_files()

        # Use the wrapper to avoid stopping the process if an error occurs
        def ekern_to_krn_wrapper(ekern_file, kern_file):
            try:
                ekern_to_krn(ekern_file, kern_file)
            except Exception as e:
                print(f'Error: {e}')

        # Convert all the files
        for ekern_file in ekrn_files:
            output_file = ekern_file.replace('.ekrn', '.krn')
            ekern_to_krn_wrapper(ekern_file, output_file)
        ```
    """
    with open(input_file, 'r') as file:
        content = file.read()

    kern_content = get_kern_from_ekern(content)

    with open(output_file, 'w') as file:
        file.write(kern_content)

export(document, options)

Export a Document object to a string.

Parameters:

Name Type Description Default
document Document

Document object to export

required
options ExportOptions

Export options

required

Returns: Exported string

Examples:

>>> import kernpy as kp
>>> document, errors = kp.read('path/to/file.krn')
>>> options = kp.ExportOptions()
>>> content = kp.export(document, options)
Source code in kernpy/core/generic.py
@deprecated("Use 'dumps' instead.")
def export(
        document: Document,
        options: ExportOptions
) -> str:
    """
    Export a Document object to a string.

    Args:
        document: Document object to export
        options: Export options

    Returns: Exported string

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.read('path/to/file.krn')
        >>> options = kp.ExportOptions()
        >>> content = kp.export(document, options)
    """
    return Generic.export(
        document=document,
        options=options
    )

get_kern_from_ekern(ekern_content)

Read the content of a ekern file and return the kern content.

Parameters:

Name Type Description Default
ekern_content str

The content of the **ekern file.

required

Returns: The content of the **kern file.

Example
# Read **ekern file
ekern_file = 'path/to/file.ekrn'
with open(ekern_file, 'r') as file:
    ekern_content = file.read()

# Get **kern content
kern_content = get_kern_from_ekern(ekern_content)
with open('path/to/file.krn', 'w') as file:
    file.write(kern_content)

Source code in kernpy/core/exporter.py
def get_kern_from_ekern(ekern_content: str) -> str:
    """
    Read the content of a **ekern file and return the **kern content.

    Args:
        ekern_content: The content of the **ekern file.
    Returns:
        The content of the **kern file.

    Example:
        ```python
        # Read **ekern file
        ekern_file = 'path/to/file.ekrn'
        with open(ekern_file, 'r') as file:
            ekern_content = file.read()

        # Get **kern content
        kern_content = get_kern_from_ekern(ekern_content)
        with open('path/to/file.krn', 'w') as file:
            file.write(kern_content)

        ```
    """
    content = ekern_content.replace("**ekern", "**kern")  # TODO: use a constant derived from the headers
    content = content.replace(TOKEN_SEPARATOR, "")
    content = content.replace(DECORATION_SEPARATOR, "")

    return content
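The transformation above boils down to plain string replacements. Here is a minimal standalone sketch of the same idea; note that the separator characters used below are illustrative placeholders, not necessarily the actual `TOKEN_SEPARATOR` and `DECORATION_SEPARATOR` constants defined in `kernpy.core`:

```python
# Hypothetical separator values chosen for illustration only;
# check kernpy.core for the real constants.
TOKEN_SEPARATOR = '@'
DECORATION_SEPARATOR = '·'

def kern_from_ekern(ekern_content: str) -> str:
    """Sketch of the **ekern -> **kern conversion: rewrite the
    header and strip the extra separator characters."""
    content = ekern_content.replace('**ekern', '**kern')
    content = content.replace(TOKEN_SEPARATOR, '')
    content = content.replace(DECORATION_SEPARATOR, '')
    return content

print(kern_from_ekern('**ekern\n4@c\n*-'))  # -> '**kern\n4c\n*-'
```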

get_spine_types(document, spine_types=None)

Get the spines of a Document object.

Parameters:

Name Type Description Default
document Document

Document object to get spines from

required
spine_types Optional[Sequence[str]]

List of spine types to get. If None, all spines are returned.

None

Returns (List[str]): List of spines

Examples:

>>> import kernpy as kp
>>> document, _ = kp.read('path/to/file.krn')
>>> kp.get_spine_types(document)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.get_spine_types(document, None)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.get_spine_types(document, ['**kern'])
['**kern', '**kern', '**kern', '**kern']
>>> kp.get_spine_types(document, ['**kern', '**root'])
['**kern', '**kern', '**kern', '**kern', '**root']
>>> kp.get_spine_types(document, ['**kern', '**root', '**harm'])
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.get_spine_types(document, [])
[]
Source code in kernpy/core/generic.py
@deprecated("Use 'spine_types' instead.")
def get_spine_types(
        document: Document,
        spine_types: Optional[Sequence[str]] = None
) -> List[str]:
    """
    Get the spines of a Document object.

    Args:
        document (Document): Document object to get spines from
        spine_types (Optional[Sequence[str]]): List of spine types to get. If None, all spines are returned.

    Returns (List[str]): List of spines

    Examples:
        >>> import kernpy as kp
        >>> document, _ = kp.read('path/to/file.krn')
        >>> kp.get_spine_types(document)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.get_spine_types(document, None)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.get_spine_types(document, ['**kern'])
        ['**kern', '**kern', '**kern', '**kern']
        >>> kp.get_spine_types(document, ['**kern', '**root'])
        ['**kern', '**kern', '**kern', '**kern', '**root']
        >>> kp.get_spine_types(document, ['**kern', '**root', '**harm'])
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.get_spine_types(document, [])
        []
    """
    return Generic.get_spine_types(
        document=document,
        spine_types=spine_types
    )

graph(document, fp)

Create a graph representation of a Document object using Graphviz and save it as a .dot file at the given output path. If the output path is None, the function returns the Graphviz content as a string on standard output.

Use the Graphviz software to convert the .dot file to an image.

Parameters:

Name Type Description Default
document Document

The Document object to export as a graphviz file.

required
fp Optional[Union[str, Path]]

The file path to write the graphviz file. If None, the function will return the graphviz content as a string to the standard output.

required

Returns (None): None

Examples:

>>> import kernpy as kp
>>> document, errors = kp.load('score.krn')
>>> kp.graph(document, 'score.dot')
None
>>> # File 'score.dot' will be created with the graphviz representation of the Document object.
>>> kp.graph(document, None)
'digraph G { ... }'
Source code in kernpy/io/public.py
def graph(document: Document, fp: Optional[Union[str, Path]]) -> None:
    """
    Create a graph representation of a Document object using Graphviz and save it as a .dot file at the given\
     output path. If the output path is None, the function returns the Graphviz content as a string on standard output.

    Use the Graphviz software to convert the .dot file to an image.


    Args:
        document (Document): The Document object to export as a graphviz file.
        fp (Optional[Union[str, Path]]): The file path to write the graphviz file. If None, the function will return the\
            graphviz content as a string to the standard output.

    Returns (None): None

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.load('score.krn')
        >>> kp.graph(document, 'score.dot')
        None
        >>> # File 'score.dot' will be created with the graphviz representation of the Document object.
        >>> kp.graph(document, None)
        'digraph G { ... }'
    """
    return generic.Generic.store_graph(
        document=document,
        path=fp
    )

kern_to_ekern(input_file, output_file)

Convert one .krn file to .ekrn file

Parameters:

Name Type Description Default
input_file str

Filepath to the input **kern

required
output_file str

Filepath to the output **ekern

required

Returns:

Type Description
None

None

Example

Convert .krn to .ekrn

kern_to_ekern('path/to/file.krn', 'path/to/file.ekrn')

Convert a list of .krn files to .ekrn files

krn_files = your_module.get_files()

# Use the wrapper to avoid stopping the process if an error occurs
def kern_to_ekern_wrapper(krn_file, ekern_file):
    try:
        kern_to_ekern(krn_file, ekern_file)
    except Exception as e:
        print(f'Error:{e}')

# Convert all the files
for krn_file in krn_files:
    output_file = krn_file.replace('.krn', '.ekrn')
    kern_to_ekern_wrapper(krn_file, output_file)
Source code in kernpy/core/exporter.py
def kern_to_ekern(
        input_file: str,
        output_file: str
) -> None:
    """
    Convert one .krn file to .ekrn file

    Args:
        input_file (str): Filepath to the input **kern
        output_file (str): Filepath to the output **ekern

    Returns:
        None

    Example:
        # Convert .krn to .ekrn
        >>> kern_to_ekern('path/to/file.krn', 'path/to/file.ekrn')

        # Convert a list of .krn files to .ekrn files
        ```python
        krn_files = your_module.get_files()

        # Use the wrapper to avoid stopping the process if an error occurs
        def kern_to_ekern_wrapper(krn_file, ekern_file):
            try:
                kern_to_ekern(krn_file, ekern_file)
            except Exception as e:
                print(f'Error:{e}')

        # Convert all the files
        for krn_file in krn_files:
            output_file = krn_file.replace('.krn', '.ekrn')
            kern_to_ekern_wrapper(krn_file, output_file)
        ```

    """
    importer = Importer()
    document = importer.import_file(input_file)

    if len(importer.errors):
        raise Exception(f'ERROR: {input_file} has errors {importer.get_error_messages()}')

    export_options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES,
                                   kern_type=Encoding.eKern)
    exporter = Exporter()
    exported_ekern = exporter.export_string(document, export_options)

    with open(output_file, 'w') as file:
        file.write(exported_ekern)

load(fp, *, raise_on_errors=False, **kwargs)

Load a Document object from a Humdrum **kern file.

Parameters:

Name Type Description Default
fp Union[str, Path]

A path-like object representing a **kern file.

required
raise_on_errors Optional[bool]

If True, raise an exception if any grammar error is detected during parsing.

False

Returns ((Document, List[str])): A tuple containing the Document object and a list of messages representing grammar errors detected during parsing. If the list is empty, the parsing did not detect any errors.

Raises:

Type Description
ValueError

If the Humdrum **kern representation could not be parsed.

Examples:

>>> import kernpy as kp
>>> document, errors = kp.load('BWV565.krn')
>>> if len(errors) > 0:
>>>     print(f"Grammar didn't recognize the following errors: {errors}")
['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
>>>     # Anyway, we can use the Document
>>>     print(document)
>>> else:
>>>     print(document)
<kernpy.core.document.Document object at 0x7f8b3b7b3d90>
Source code in kernpy/io/public.py
def load(fp: Union[str, Path], *, raise_on_errors: Optional[bool] = False, **kwargs) -> (Document, List[str]):
    """
    Load a Document object from a Humdrum **kern file.

    Args:
        fp (Union[str, Path]): A path-like object representing a **kern file.
        raise_on_errors (Optional[bool], optional): If True, raise an exception if any grammar error is detected\
            during parsing.

    Returns ((Document, List[str])): A tuple containing the Document object and a list of messages representing \
        grammar errors detected during parsing. If the list is empty,\
        the parsing did not detect any errors.

    Raises:
        ValueError: If the Humdrum **kern representation could not be parsed.

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.load('BWV565.krn')
        >>> if len(errors) > 0:
        >>>     print(f"Grammar didn't recognize the following errors: {errors}")
        ['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
        >>>     # Anyway, we can use the Document
        >>>     print(document)
        >>> else:
        >>>     print(document)
        <kernpy.core.document.Document object at 0x7f8b3b7b3d90>
    """
    return generic.Generic.read(
        path=fp,
        strict=raise_on_errors,
    )

loads(s, *, raise_on_errors=False, **kwargs)

Load a Document object from a string encoded in Humdrum **kern.

Args:
    s (str): A string containing a **kern file.
    raise_on_errors (Optional[bool], optional): If True, raise an exception if any grammar error is detected during parsing.

Returns ((Document, List[str])): A tuple containing the Document object and a list of messages representing grammar errors detected during parsing. If the list is empty, the parsing did not detect any errors.

Raises:
    ValueError: If the Humdrum **kern representation could not be parsed.

Examples:
    >>> import kernpy as kp
    >>> document, errors = kp.loads('**kern\n*clefG2\n=1\n4c\n4d\n4e\n4f\n')
    >>> if len(errors) > 0:
    >>>     print(f"Grammar didn't recognize the following errors: {errors}")
    ['Error: Invalid **kern spine: 1']
    >>>     # Anyway, we can use the Document
    >>>     print(document)
    >>> else:
    >>>     print(document)
    <kernpy.core.document.Document object at 0x7f8b3b7b3d90>

Source code in kernpy/io/public.py
def loads(s, *, raise_on_errors: Optional[bool] = False, **kwargs) -> (Document, List[str]):
    """
    Load a Document object from a string encoded in Humdrum **kern.

    Args:
        s (str): A string containing a **kern file.
        raise_on_errors (Optional[bool], optional): If True, raise an exception if any grammar error is detected\
            during parsing.

    Returns ((Document, List[str])): A tuple containing the Document object and a list of messages representing \
        grammar errors detected during parsing. If the list is empty,\
        the parsing did not detect any errors.

    Raises:
        ValueError: If the Humdrum **kern representation could not be parsed.

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.loads('**kern\n*clefG2\n=1\n4c\n4d\n4e\n4f\n')
        >>> if len(errors) > 0:
        >>>     print(f"Grammar didn't recognize the following errors: {errors}")
        ['Error: Invalid **kern spine: 1']
        >>>     # Anyway, we can use the Document
        >>>     print(document)
        >>> else:
        >>>     print(document)
        <kernpy.core.document.Document object at 0x7f8b3b7b3d90>
    """
    return generic.Generic.create(
        content=s,
        strict=raise_on_errors,
    )

merge(contents, *, raise_on_errors=False)

Merge multiple **kern fragments into a single **kern string. All the fragments should be presented in order. Each fragment does not need to be a complete **kern file.

Warnings:
    Processing a large number of files in a row may take some time.
    This method performs as many `kp.read` operations as there are fragments to concatenate.
Args:
    contents (Sequence[str]): List of **kern strings
    raise_on_errors (Optional[bool], optional): If True, raise an exception if any grammar error is detected during parsing.

Returns (Tuple[Document, List[Tuple[int, int]]]): Document object and a List of Pairs (Tuple[int, int]) representing the measure fragment indexes of the concatenated document.

Examples:
    >>> import kernpy as kp
    >>> contents = ['**kern\n4e\n4f\n4g\n*-\n*-', '**kern\n4a\n4b\n4c\n*-\n=\n*-', '**kern\n4d\n4e\n4f\n*-\n*-']
    >>> document, indexes = kp.merge(contents)
    >>> indexes
    [(0, 3), (3, 6), (6, 9)]
    >>> for start, end in indexes:
    >>>     print(kp.dumps(document, from_measure=start, to_measure=end))

Source code in kernpy/io/public.py
def merge(
        contents: List[str],
        *,
        raise_on_errors: Optional[bool] = False,
) -> Tuple[Document, List[Tuple[int, int]]]:
    """
    Merge multiple **kern fragments into a single **kern string. \
     All the fragments should be presented in order. Each fragment does not need to be a complete **kern file. \

    Warnings:
        Processing a large number of files in a row may take some time.
         This method performs as many `kp.read` operations as there are fragments to concatenate.
    Args:
        contents (Sequence[str]): List of **kern strings
        raise_on_errors (Optional[bool], optional): If True, raise an exception if any grammar error is detected\
            during parsing.

    Returns (Tuple[Document, List[Tuple[int, int]]]): Document object and \
        a List of Pairs (Tuple[int, int]) representing the measure fragment indexes of the concatenated document.

    Examples:
        >>> import kernpy as kp
        >>> contents = ['**kern\n4e\n4f\n4g\n*-\n*-', '**kern\n4a\n4b\n4c\n*-\n=\n*-', '**kern\n4d\n4e\n4f\n*-\n*-']
        >>> document, indexes = kp.merge(contents)
        >>> indexes
        [(0, 3), (3, 6), (6, 9)]
        >>> for start, end in indexes:
        >>>     print(kp.dumps(document, from_measure=start, to_measure=end))
    """
    return generic.Generic.merge(
        contents=contents,
        strict=raise_on_errors
    )

read(path, strict=False)

Read a Humdrum **kern file.

Parameters:

Name Type Description Default
path Union[str, Path]

File path to read

required
strict Optional[bool]

If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

False

Returns (Document, List[str]): Document object and list of error messages. Empty list if no errors.

Examples:

>>> import kernpy as kp
>>> document, _ = kp.read('path/to/file.krn')
>>> document, errors = kp.read('path/to/file.krn')
>>> if len(errors) > 0:
>>>     print(errors)
['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
Source code in kernpy/core/generic.py
@deprecated("Use 'load' instead.")
def read(
        path: Union[str, Path],
        strict: Optional[bool] = False
) -> (Document, List[str]):
    """
    Read a Humdrum **kern file.

    Args:
        path (Union[str, Path]): File path to read
        strict (Optional[bool]): If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

    Returns (Document, List[str]): Document object and list of error messages. Empty list if no errors.

    Examples:
        >>> import kernpy as kp
        >>> document, _ = kp.read('path/to/file.krn')

        >>> document, errors = kp.read('path/to/file.krn')
        >>> if len(errors) > 0:
        >>>     print(errors)
        ['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
    """
    return Generic.read(
        path=Path(path),
        strict=strict
    )

spine_types(document, headers=None)

Get the spines of a Document object.

Parameters:

Name Type Description Default
document Document

Document object to get spines from

required
headers Optional[Sequence[str]]

List of spine types to get. If None, all spines are returned. Using a header will return all the spines of that type.

None

Returns (List[str]): List of spines

Examples:

>>> import kernpy as kp
>>> document, _ = kp.read('path/to/file.krn')
>>> kp.spine_types(document)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.spine_types(document, None)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.spine_types(document, headers=None)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.spine_types(document, headers=['**kern'])
['**kern', '**kern', '**kern', '**kern']
>>> kp.spine_types(document, headers=['**kern', '**root'])
['**kern', '**kern', '**kern', '**kern', '**root']
>>> kp.spine_types(document, headers=['**kern', '**root', '**harm'])
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.spine_types(document, headers=[])
[]
Source code in kernpy/io/public.py
def spine_types(
        document: Document,
        headers: Optional[Sequence[str]] = None
) -> List[str]:
    """
    Get the spines of a Document object.

    Args:
        document (Document): Document object to get spines from
        headers (Optional[Sequence[str]]): List of spine types to get. If None, all spines are returned. Using a \
         header will return all the spines of that type.

    Returns (List[str]): List of spines

    Examples:
        >>> import kernpy as kp
        >>> document, _ = kp.read('path/to/file.krn')
        >>> kp.spine_types(document)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.spine_types(document, None)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.spine_types(document, headers=None)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.spine_types(document, headers=['**kern'])
        ['**kern', '**kern', '**kern', '**kern']
        >>> kp.spine_types(document, headers=['**kern', '**root'])
        ['**kern', '**kern', '**kern', '**kern', '**root']
        >>> kp.spine_types(document, headers=['**kern', '**root', '**harm'])
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.spine_types(document, headers=[])
        []
    """
    return generic.Generic.get_spine_types(
        document=document,
        spine_types=headers
    )

store(document, path, options)

Store a Document object to a file.

Parameters:

Name Type Description Default
document Document

Document object to store

required
path Union[str, Path]

File path to store

required
options ExportOptions

Export options

required

Returns: None

Examples:

>>> import kernpy as kp
>>> document, errors = kp.read('path/to/file.krn')
>>> options = kp.ExportOptions()
>>> kp.store(document, 'path/to/store.krn', options)
Source code in kernpy/core/generic.py
@deprecated("Use 'dump' instead.")
def store(
        document: Document,
        path: Union[str, Path],
        options: ExportOptions
) -> None:
    """
    Store a Document object to a file.

    Args:
        document (Document): Document object to store
        path (Union[str, Path]): File path to store
        options (ExportOptions): Export options

    Returns: None

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.read('path/to/file.krn')
        >>> options = kp.ExportOptions()
        >>> kp.store(document, 'path/to/store.krn', options)

    """
    Generic.store(
        document=document,
        path=Path(path),
        options=options
    )

store_graph(document, path)

Create a graph representation of a Document object using Graphviz. Save the graph to a file.

Parameters:

Name Type Description Default
document Document

Document object to create graph from

required
path str

File path to save the graph

required

Returns (None): None

Examples:

>>> import kernpy as kp
>>> document, errors = kp.read('path/to/file.krn')
>>> kp.store_graph(document, 'path/to/graph.dot')
Source code in kernpy/core/generic.py
@deprecated("Use 'graph' instead.")
def store_graph(
        document: Document,
        path: Union[str, Path]
) -> None:
    """
    Create a graph representation of a Document object using Graphviz. Save the graph to a file.

    Args:
        document (Document): Document object to create graph from
        path (str): File path to save the graph

    Returns (None): None

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.read('path/to/file.krn')
        >>> kp.store_graph(document, 'path/to/graph.dot')
    """
    return Generic.store_graph(
        document=document,
        path=Path(path)
    )

transpose(input_encoding, interval, input_format=NotationEncoding.HUMDRUM.value, output_format=NotationEncoding.HUMDRUM.value, direction=Direction.UP.value)

Transpose a pitch by a given interval.

The pitch must be in the American notation.

Parameters:

Name Type Description Default
input_encoding str

The pitch to transpose.

required
interval int

The interval to transpose the pitch.

required
input_format str

The encoding format of the pitch. Default is HUMDRUM.

HUMDRUM.value
output_format str

The encoding format of the transposed pitch. Default is HUMDRUM.

HUMDRUM.value
direction str

The direction of the transposition: 'UP' or 'DOWN'. Default is 'UP'.

UP.value

Returns:

Name Type Description
str str

The transposed pitch.

Examples:

>>> transpose('ccc', IntervalsByName['P4'], input_format='kern', output_format='kern')
'fff'
>>> transpose('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
'fff'
>>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
'gg'
>>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction=Direction.DOWN.value)
'gg'
>>> transpose('ccc#', IntervalsByName['P4'])
'fff#'
>>> transpose('G4', IntervalsByName['m3'], input_format='american')
'Bb4'
>>> transpose('G4', IntervalsByName['m3'], input_format=NotationEncoding.AMERICAN.value)
'Bb4'
>>> transpose('C3', IntervalsByName['P4'], input_format='american', direction='down')
'G2'
Source code in kernpy/core/transposer.py
def transpose(
        input_encoding: str,
        interval: int,
        input_format: str = NotationEncoding.HUMDRUM.value,
        output_format: str = NotationEncoding.HUMDRUM.value,
        direction: str = Direction.UP.value
) -> str:
    """
    Transpose a pitch by a given interval.

    The pitch must be in the American notation.

    Args:
        input_encoding (str): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        input_format (str): The encoding format of the pitch. Default is HUMDRUM.
        output_format (str): The encoding format of the transposed pitch. Default is HUMDRUM.
        direction (str): The direction of the transposition: 'UP' or 'DOWN'. Default is 'UP'.

    Returns:
        str: The transposed pitch.

    Examples:
        >>> transpose('ccc', IntervalsByName['P4'], input_format='kern', output_format='kern')
        'fff'
        >>> transpose('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
        'fff'
        >>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
        'gg'
        >>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction=Direction.DOWN.value)
        'gg'
        >>> transpose('ccc#', IntervalsByName['P4'])
        'fff#'
        >>> transpose('G4', IntervalsByName['m3'], input_format='american')
        'Bb4'
        >>> transpose('G4', IntervalsByName['m3'], input_format=NotationEncoding.AMERICAN.value)
        'Bb4'
        >>> transpose('C3', IntervalsByName['P4'], input_format='american', direction='down')
        'G2'


    """
    importer = PitchImporterFactory.create(input_format)
    pitch: AgnosticPitch = importer.import_pitch(input_encoding)

    transposed_pitch = transpose_agnostics(pitch, interval, direction=direction)

    exporter = PitchExporterFactory.create(output_format)
    content = exporter.export_pitch(transposed_pitch)

    return content

transpose_agnostic_to_encoding(agnostic_pitch, interval, output_format=NotationEncoding.HUMDRUM.value, direction=Direction.UP.value)

Transpose an AgnosticPitch by a given interval.

Parameters:

agnostic_pitch (AgnosticPitch): The pitch to transpose. (required)
interval (int): The interval to transpose the pitch. (required)
output_format (Optional[str]): The encoding format of the transposed pitch. (default: NotationEncoding.HUMDRUM.value)
direction (Optional[str]): The direction of the transposition: 'UP' or 'DOWN'. (default: Direction.UP.value)

Returns (str): The transposed pitch.

Examples:

>>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'])
'F4'
>>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
'G3'
>>> transpose_agnostic_to_encoding(AgnosticPitch('C#', 4), IntervalsByName['P4'])
'F#4'
>>> transpose_agnostic_to_encoding(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
'Bb4'
Source code in kernpy/core/transposer.py
def transpose_agnostic_to_encoding(
        agnostic_pitch: AgnosticPitch,
        interval: int,
        output_format: str = NotationEncoding.HUMDRUM.value,
        direction: str = Direction.UP.value
) -> str:
    """
    Transpose an AgnosticPitch by a given interval.

    Args:
        agnostic_pitch (AgnosticPitch): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        output_format (Optional[str]): The encoding format of the transposed pitch. Default is HUMDRUM.
        direction (Optional[str]): The direction of the transposition: 'UP' or 'DOWN'. Default is 'UP'.

    Returns:
        str: The transposed pitch.

    Examples:
        >>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'])
        'F4'
        >>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
        'G3'
        >>> transpose_agnostic_to_encoding(AgnosticPitch('C#', 4), IntervalsByName['P4'])
        'F#4'
        >>> transpose_agnostic_to_encoding(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
        'Bb4'
    """
    exporter = PitchExporterFactory.create(output_format)
    transposed_pitch = transpose_agnostics(agnostic_pitch, interval, direction=direction)
    content = exporter.export_pitch(transposed_pitch)

    return content

transpose_agnostics(input_pitch, interval, direction=Direction.UP.value)

Transpose an AgnosticPitch by a given interval.

Parameters:

input_pitch (AgnosticPitch): The pitch to transpose. (required)
interval (int): The interval to transpose the pitch. (required)
direction (str): The direction of the transposition: 'UP' or 'DOWN'. (default: Direction.UP.value)

Returns (AgnosticPitch): The transposed pitch.

Examples:

>>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'])
AgnosticPitch('F', 4)
>>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
AgnosticPitch('G', 3)
>>> transpose_agnostics(AgnosticPitch('C#', 4), IntervalsByName['P4'])
AgnosticPitch('F#', 4)
>>> transpose_agnostics(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
AgnosticPitch('Bb', 4)
Source code in kernpy/core/transposer.py
def transpose_agnostics(
        input_pitch: AgnosticPitch,
        interval: int,
        direction: str = Direction.UP.value
) -> AgnosticPitch:
    """
    Transpose an AgnosticPitch by a given interval.

    Args:
        input_pitch (AgnosticPitch): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        direction (str): The direction of the transposition. 'UP' or 'DOWN'. Default is 'UP'.

    Returns:
        AgnosticPitch: The transposed pitch.

    Examples:
        >>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'])
        AgnosticPitch('F', 4)
        >>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
        AgnosticPitch('G', 3)
        >>> transpose_agnostics(AgnosticPitch('C#', 4), IntervalsByName['P4'])
        AgnosticPitch('F#', 4)
        >>> transpose_agnostics(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
        AgnosticPitch('Bb', 4)

    """
    return AgnosticPitch.to_transposed(input_pitch, interval, direction)

transpose_encoding_to_agnostic(input_encoding, interval, input_format=NotationEncoding.HUMDRUM.value, direction=Direction.UP.value)

Transpose a pitch by a given interval.

The pitch must be encoded in the notation given by input_format.

Parameters:

input_encoding (str): The pitch to transpose. (required)
interval (int): The interval to transpose the pitch. (required)
input_format (str): The encoding format of the pitch. (default: NotationEncoding.HUMDRUM.value)
direction (str): The direction of the transposition: 'UP' or 'DOWN'. (default: Direction.UP.value)

Returns (AgnosticPitch): The transposed pitch.

Examples:

>>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern')
AgnosticPitch('fff', 4)
>>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
AgnosticPitch('fff', 4)
>>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
AgnosticPitch('gg', 3)
>>> transpose_encoding_to_agnostic('ccc#', IntervalsByName['P4'])
AgnosticPitch('fff#', 4)
>>> transpose_encoding_to_agnostic('G4', IntervalsByName['m3'], input_format='american')
AgnosticPitch('Bb4', 4)
>>> transpose_encoding_to_agnostic('C3', IntervalsByName['P4'], input_format='american', direction='down')
AgnosticPitch('G2', 2)
Source code in kernpy/core/transposer.py
def transpose_encoding_to_agnostic(
        input_encoding: str,
        interval: int,
        input_format: str = NotationEncoding.HUMDRUM.value,
        direction: str = Direction.UP.value
) -> AgnosticPitch:
    """
    Transpose a pitch by a given interval.

    The pitch must be encoded in the notation given by input_format.

    Args:
        input_encoding (str): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        input_format (str): The encoding format of the pitch. Default is HUMDRUM.
        direction (str): The direction of the transposition: 'UP' or 'DOWN'. Default is 'UP'.

    Returns:
        AgnosticPitch: The transposed pitch.

    Examples:
        >>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern')
        AgnosticPitch('fff', 4)
        >>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
        AgnosticPitch('fff', 4)
        >>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
        AgnosticPitch('gg', 3)
        >>> transpose_encoding_to_agnostic('ccc#', IntervalsByName['P4'])
        AgnosticPitch('fff#', 4)
        >>> transpose_encoding_to_agnostic('G4', IntervalsByName['m3'], input_format='american')
        AgnosticPitch('Bb4', 4)
        >>> transpose_encoding_to_agnostic('C3', IntervalsByName['P4'], input_format='american', direction='down')
        AgnosticPitch('G2', 2)

    """
    importer = PitchImporterFactory.create(input_format)
    pitch: AgnosticPitch = importer.import_pitch(input_encoding)

    return transpose_agnostics(pitch, interval, direction=direction)

Modules

kernpy.core

=====

This module contains the core functionality of the kernpy package.

Intervals = {-2: 'dd1', -1: 'd1', 0: 'P1', 1: 'A1', 2: 'AA1', 3: 'dd2', 4: 'd2', 5: 'm2', 6: 'M2', 7: 'A2', 8: 'AA2', 9: 'dd3', 10: 'd3', 11: 'm3', 12: 'M3', 13: 'A3', 14: 'AA3', 15: 'dd4', 16: 'd4', 17: 'P4', 18: 'A4', 19: 'AA4', 21: 'dd5', 22: 'd5', 23: 'P5', 24: 'A5', 25: 'AA5', 26: 'dd6', 27: 'd6', 28: 'm6', 29: 'M6', 30: 'A6', 31: 'AA6', 32: 'dd7', 33: 'd7', 34: 'm7', 35: 'M7', 36: 'A7', 37: 'AA7', 40: 'octave'} module-attribute

Base-40 interval classes (d=diminished, m=minor, M=major, P=perfect, A=augmented)
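
The base-40 arithmetic behind this table can be sketched in plain Python. This is an illustrative sketch, not kernpy's implementation: the chroma slots for the natural notes are assumed (Hewlett-style) values chosen to be consistent with the interval classes above.

```python
# Assumed Hewlett-style base-40 slots for the natural notes.
# The gaps between letters leave room for up to two sharps/flats.
CHROMAS = {'C': 3, 'D': 9, 'E': 15, 'F': 20, 'G': 26, 'A': 32, 'B': 38}

# Subset of the Intervals table above: chroma difference -> interval name.
INTERVALS = {0: 'P1', 5: 'm2', 6: 'M2', 11: 'm3', 12: 'M3',
             17: 'P4', 23: 'P5', 28: 'm6', 29: 'M6', 34: 'm7',
             35: 'M7', 40: 'octave'}

def chroma(name: str, octave: int, alteration: int = 0) -> int:
    """Absolute chroma: 40 per octave plus the pitch-class slot.
    `alteration` is +1 per sharp and -1 per flat."""
    return 40 * octave + CHROMAS[name] + alteration

def interval_name(low: int, high: int) -> str:
    """Name the interval between two absolute chromas."""
    return INTERVALS[high - low]

print(interval_name(chroma('C', 4), chroma('F', 4)))  # P4
print(interval_name(chroma('D', 4), chroma('F', 4)))  # m3
print(interval_name(chroma('G', 3), chroma('G', 4)))  # octave
```

Note how diminished and augmented qualities fall out of the same subtraction: lowering the upper note by one slot turns an M3 (12) into an m3 (11), a d3 (10), and so on, exactly as in the table.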

AbstractToken

Bases: ABC

An abstract base class representing a token.

This class serves as a blueprint for creating various types of tokens, which are categorized based on their TokenCategory.

Attributes:

encoding (str): The original representation of the token.
category (TokenCategory): The category of the token.
hidden (bool): A flag indicating whether the token is hidden. Defaults to False.

Source code in kernpy/core/tokens.py
class AbstractToken(ABC):
    """
    An abstract base class representing a token.

    This class serves as a blueprint for creating various types of tokens, which are
    categorized based on their TokenCategory.

    Attributes:
        encoding (str): The original representation of the token.
        category (TokenCategory): The category of the token.
        hidden (bool): A flag indicating whether the token is hidden. Defaults to False.
    """

    def __init__(self, encoding: str, category: TokenCategory):
        """
        AbstractToken constructor

        Args:
            encoding (str): The original representation of the token.
            category (TokenCategory): The category of the token.
        """
        self.encoding = encoding
        self.category = category
        self.hidden = False

    @abstractmethod
    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns:
            str: The encoded token representation, potentially filtered if a filter_categories function is provided.

        Examples:
            >>> token = AbstractToken('*clefF4', TokenCategory.SIGNATURES)
            >>> token.export()
            '*clefF4'
            >>> token.export(filter_categories=lambda cat: cat in {TokenCategory.SIGNATURES, TokenCategory.SIGNATURES.DURATION})
            '*clefF4'
        """
        pass


    def __str__(self):
        """
        Returns the string representation of the token.

        Returns (str): The string representation of the token without processing.
        """
        return self.export()

    def __eq__(self, other):
        """
        Compare two tokens.

        Args:
            other (AbstractToken): The other token to compare.
        Returns (bool): True if the tokens are equal, False otherwise.
        """
        if not isinstance(other, AbstractToken):
            return False
        return self.encoding == other.encoding and self.category == other.category

    def __ne__(self, other):
        """
        Compare two tokens.

        Args:
            other (AbstractToken): The other token to compare.
        Returns (bool): True if the tokens are different, False otherwise.
        """
        return not self.__eq__(other)

    def __hash__(self):
        """
        Returns the hash of the token.

        Returns (int): The hash of the token.
        """
        return hash((self.export(), self.category))

__eq__(other)

Compare two tokens.

Parameters:

other (AbstractToken): The other token to compare. (required)

Returns (bool): True if the tokens are equal, False otherwise.

Source code in kernpy/core/tokens.py
def __eq__(self, other):
    """
    Compare two tokens.

    Args:
        other (AbstractToken): The other token to compare.
    Returns (bool): True if the tokens are equal, False otherwise.
    """
    if not isinstance(other, AbstractToken):
        return False
    return self.encoding == other.encoding and self.category == other.category

__hash__()

Returns the hash of the token.

Returns (int): The hash of the token.

Source code in kernpy/core/tokens.py
def __hash__(self):
    """
    Returns the hash of the token.

    Returns (int): The hash of the token.
    """
    return hash((self.export(), self.category))

__init__(encoding, category)

AbstractToken constructor

Parameters:

encoding (str): The original representation of the token. (required)
category (TokenCategory): The category of the token. (required)
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str, category: TokenCategory):
    """
    AbstractToken constructor

    Args:
        encoding (str): The original representation of the token.
        category (TokenCategory): The category of the token.
    """
    self.encoding = encoding
    self.category = category
    self.hidden = False

__ne__(other)

Compare two tokens.

Parameters:

other (AbstractToken): The other token to compare. (required)

Returns (bool): True if the tokens are different, False otherwise.

Source code in kernpy/core/tokens.py
def __ne__(self, other):
    """
    Compare two tokens.

    Args:
        other (AbstractToken): The other token to compare.
    Returns (bool): True if the tokens are different, False otherwise.
    """
    return not self.__eq__(other)

__str__()

Returns the string representation of the token.

Returns (str): The string representation of the token without processing.

Source code in kernpy/core/tokens.py
def __str__(self):
    """
    Returns the string representation of the token.

    Returns (str): The string representation of the token without processing.
    """
    return self.export()

export(**kwargs) abstractmethod

Exports the token.

Other Parameters:

filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None; if None, all tokens will be exported.

Returns:

str: The encoded token representation, potentially filtered if a filter_categories function is provided.

Examples:

>>> token = AbstractToken('*clefF4', TokenCategory.SIGNATURES)
>>> token.export()
'*clefF4'
>>> token.export(filter_categories=lambda cat: cat in {TokenCategory.SIGNATURES, TokenCategory.SIGNATURES.DURATION})
'*clefF4'
Source code in kernpy/core/tokens.py
@abstractmethod
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns:
        str: The encoded token representation, potentially filtered if a filter_categories function is provided.

    Examples:
        >>> token = AbstractToken('*clefF4', TokenCategory.SIGNATURES)
        >>> token.export()
        '*clefF4'
        >>> token.export(filter_categories=lambda cat: cat in {TokenCategory.SIGNATURES, TokenCategory.SIGNATURES.DURATION})
        '*clefF4'
    """
    pass

AgnosticPitch

Represents a pitch in a generic way, independent of the notation system used.

Source code in kernpy/core/pitch_models.py
class AgnosticPitch:
    """
    Represents a pitch in a generic way, independent of the notation system used.
    """

    ASCENDANT_ACCIDENTAL_ALTERATION = '+'
    DESCENDENT_ACCIDENTAL_ALTERATION = '-'
    ACCIDENTAL_ALTERATIONS = {
        ASCENDANT_ACCIDENTAL_ALTERATION,
        DESCENDENT_ACCIDENTAL_ALTERATION
    }


    def __init__(self, name: str, octave: int):
        """
        Initialize the AgnosticPitch object.

        Args:
            name (str): The name of the pitch (e.g., 'C', 'D#', 'Bb').
            octave (int): The octave of the pitch (e.g., 4 for middle C).
        """
        self.name = name
        self.octave = octave

    @property
    def name(self):
        return self.__name

    @name.setter
    def name(self, name):
        accidentals = ''.join([c for c in name if c in ['-', '+']])
        name = name.upper()
        name = name.replace('#', '+').replace('b', '-')

        check_name = name.replace('+', '').replace('-', '')
        if check_name not in pitches:
            raise ValueError(f"Invalid pitch: {name}")
        if len(accidentals) > 3:
            raise ValueError(f"Invalid pitch: {name}. Maximum of 3 accidentals allowed. ")
        self.__name = name

    @property
    def octave(self):
        return self.__octave

    @octave.setter
    def octave(self, octave):
        if not isinstance(octave, int):
            raise ValueError(f"Invalid octave: {octave}")
        self.__octave = octave

    def get_chroma(self):
        return 40 * self.octave + Chromas[self.name]

    @classmethod
    def to_transposed(cls, agnostic_pitch: 'AgnosticPitch', raw_interval, direction: str = Direction.UP.value) -> 'AgnosticPitch':
        delta = raw_interval if direction == Direction.UP.value else - raw_interval
        chroma = agnostic_pitch.get_chroma() + delta
        name = ChromasByValue[chroma % 40]
        octave = chroma // 40
        return AgnosticPitch(name, octave)

    @classmethod
    def get_chroma_from_interval(cls, pitch_a: 'AgnosticPitch', pitch_b: 'AgnosticPitch'):
        return pitch_b.get_chroma() - pitch_a.get_chroma()

    def __str__(self):
        return f"<{self.name}, {self.octave}>"

    def __repr__(self):
        return f"{self.__class__.__name__}(name={self.name}, octave={self.octave})"

    def __eq__(self, other):
        if not isinstance(other, AgnosticPitch):
            return False
        return self.name == other.name and self.octave == other.octave

    def __ne__(self, other):
        if not isinstance(other, AgnosticPitch):
            return True
        return self.name != other.name or self.octave != other.octave

    def __hash__(self):
        return hash((self.name, self.octave))

    def __lt__(self, other):
        if not isinstance(other, AgnosticPitch):
            return NotImplemented
        if self.octave == other.octave:
            return Chromas[self.name] < Chromas[other.name]
        return self.octave < other.octave

    def __gt__(self, other):
        if not isinstance(other, AgnosticPitch):
            return NotImplemented
        if self.octave == other.octave:
            return Chromas[self.name] > Chromas[other.name]
        return self.octave > other.octave
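
The `to_transposed` logic above is modular base-40 arithmetic: apply a signed delta to the absolute chroma, then split it back into a pitch class and an octave. A minimal standalone sketch follows; the chroma table is an assumed subset, since kernpy's own `Chromas`/`ChromasByValue` mappings also cover altered names.

```python
# Assumed base-40 slots for natural notes (subset of kernpy's Chromas).
CHROMAS = {'C': 3, 'D': 9, 'E': 15, 'F': 20, 'G': 26, 'A': 32, 'B': 38}
NAMES_BY_CHROMA = {slot: name for name, slot in CHROMAS.items()}

def transpose(name: str, octave: int, interval: int, up: bool = True):
    """Mirror of AgnosticPitch.to_transposed: shift the absolute chroma
    by a signed delta, then recover (name, octave) with % 40 and // 40."""
    delta = interval if up else -interval
    chroma = 40 * octave + CHROMAS[name] + delta
    return NAMES_BY_CHROMA[chroma % 40], chroma // 40

print(transpose('C', 4, 17))            # up a P4    -> ('F', 4)
print(transpose('C', 4, 17, up=False))  # down a P4  -> ('G', 3)
```

The floor division handles octave crossings for free: C4 down a P4 lands on a chroma below 160, so `chroma // 40` yields octave 3 without any special casing.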

__init__(name, octave)

Initialize the AgnosticPitch object.

Parameters:

name (str): The name of the pitch (e.g., 'C', 'D#', 'Bb'). (required)
octave (int): The octave of the pitch (e.g., 4 for middle C). (required)
Source code in kernpy/core/pitch_models.py
def __init__(self, name: str, octave: int):
    """
    Initialize the AgnosticPitch object.

    Args:
        name (str): The name of the pitch (e.g., 'C', 'D#', 'Bb').
        octave (int): The octave of the pitch (e.g., 4 for middle C).
    """
    self.name = name
    self.octave = octave

Alteration

Bases: Enum

Enum for the alteration of a pitch.

Source code in kernpy/core/gkern.py
class Alteration(Enum):
    """
    Enum for the alteration of a pitch.
    """
    NONE = 0
    SHARP = 1
    FLAT = -1
    DOUBLE_SHARP = 2
    DOUBLE_FLAT = -2
    TRIPLE_SHARP = 3
    TRIPLE_FLAT = -3
    HALF_SHARP = 0.5
    HALF_FLAT = -0.5
    QUARTER_SHARP = 0.25
    QUARTER_FLAT = -0.25

    def __str__(self) -> str:
        return self.name

BarToken

Bases: SimpleToken

BarToken class.

Source code in kernpy/core/tokens.py
class BarToken(SimpleToken):
    """
    BarToken class.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.BARLINES)

BasicSpineImporter

Bases: SpineImporter

Source code in kernpy/core/basic_spine_importer.py
class BasicSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        KernSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()  # TODO: Create a custom functional listener for BasicSpineImporter

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.OTHER)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.BARLINES,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.OTHER)

        return token

__init__(verbose=False)

KernSpineImporter constructor.

Parameters:

verbose (Optional[bool]): Level of verbosity for error messages. (default: False)
Source code in kernpy/core/basic_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    KernSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

BekernTokenizer

Bases: Tokenizer

BekernTokenizer converts a Token into a bekern (Basic Extended **kern) string representation. This format uses a '@' separator for the main tokens but discards all the decoration tokens.

Source code in kernpy/core/tokenizers.py
class BekernTokenizer(Tokenizer):
    """
    BekernTokenizer converts a Token into a bekern (Basic Extended **kern) string representation. This format uses a '@' separator for the \
    main tokens but discards all the decoration tokens.
    """

    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new BekernTokenizer

        Args:
            token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised.
        """
        super().__init__(token_categories=token_categories)

    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a bekern string representation.
        Args:
            token (Token): Token to be tokenized.

        Returns (str): bekern string representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> BekernTokenizer().tokenize(token)
            '2@.@bb@-'
        """
        ekern_content = token.export(filter_categories=lambda cat: cat in self.token_categories)

        if DECORATION_SEPARATOR not in ekern_content:
            return ekern_content

        reduced_content = ekern_content.split(DECORATION_SEPARATOR)[0]
        if reduced_content.endswith(TOKEN_SEPARATOR):
            reduced_content = reduced_content[:-1]

        return reduced_content
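
The reduction performed by `tokenize` can be shown standalone. The separator values below are assumptions inferred from the doctest ('@' as `TOKEN_SEPARATOR`, '·' as `DECORATION_SEPARATOR`), not verified kernpy constants:

```python
TOKEN_SEPARATOR = '@'       # assumed value of kernpy's TOKEN_SEPARATOR
DECORATION_SEPARATOR = '·'  # assumed value of kernpy's DECORATION_SEPARATOR

def to_bekern(ekern: str) -> str:
    """Keep the main tokens only: cut at the first decoration separator
    and drop any trailing token separator, as BekernTokenizer does."""
    main = ekern.split(DECORATION_SEPARATOR)[0]
    return main.rstrip(TOKEN_SEPARATOR)

def to_bkern(ekern: str) -> str:
    """bkern additionally removes the token separators themselves."""
    return to_bekern(ekern).replace(TOKEN_SEPARATOR, '')

print(to_bekern('2@.@bb@-·_·L'))  # 2@.@bb@-
print(to_bkern('2@.@bb@-·_·L'))   # 2.bb-
```

This also illustrates the relationship between the two formats: bkern is exactly bekern with the '@' separators collapsed, which is how BkernTokenizer below is implemented.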

__init__(*, token_categories)

Create a new BekernTokenizer

Parameters:

token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised. (required)
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new BekernTokenizer

    Args:
        token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into a bekern string representation.

Parameters:

token (Token): Token to be tokenized. (required)

Returns (str): bekern string representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> BekernTokenizer().tokenize(token)
'2@.@bb@-'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a bekern string representation.
    Args:
        token (Token): Token to be tokenized.

    Returns (str): bekern string representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> BekernTokenizer().tokenize(token)
        '2@.@bb@-'
    """
    ekern_content = token.export(filter_categories=lambda cat: cat in self.token_categories)

    if DECORATION_SEPARATOR not in ekern_content:
        return ekern_content

    reduced_content = ekern_content.split(DECORATION_SEPARATOR)[0]
    if reduced_content.endswith(TOKEN_SEPARATOR):
        reduced_content = reduced_content[:-1]

    return reduced_content

BkernTokenizer

Bases: Tokenizer

BkernTokenizer converts a Token into a bkern (Basic **kern) string representation. This format uses the main tokens but not the decoration tokens. This format is a lightweight version of the classic Humdrum **kern format.

Source code in kernpy/core/tokenizers.py
class BkernTokenizer(Tokenizer):
    """
    BkernTokenizer converts a Token into a bkern (Basic **kern) string representation. This format uses \
    the main tokens but not the decoration tokens. This format is a lightweight version of the classic
    Humdrum **kern format.
    """

    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new BkernTokenizer

        Args:
            token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised.
        """
        super().__init__(token_categories=token_categories)


    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a bkern string representation.
        Args:
            token (Token): Token to be tokenized.

        Returns (str): bkern string representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> BkernTokenizer().tokenize(token)
            '2.bb-'
        """
        return BekernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '')

__init__(*, token_categories)

Create a new BkernTokenizer

Parameters:

token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised. (required)
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new BkernTokenizer

    Args:
        token_categories (Set[TokenCategory]): Set of categories to be tokenized. If None, an exception is raised.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into a bkern string representation.

Parameters:

token (Token): Token to be tokenized. (required)

Returns (str): bkern string representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> BkernTokenizer().tokenize(token)
'2.bb-'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a bkern string representation.
    Args:
        token (Token): Token to be tokenized.

    Returns (str): bkern string representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> BkernTokenizer().tokenize(token)
        '2.bb-'
    """
    return BekernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '')

BoundingBox

BoundingBox class.

It contains the coordinates of the score bounding box. Useful for full-page tasks.

Attributes:

from_x (int): The x coordinate of the top left corner.
from_y (int): The y coordinate of the top left corner.
to_x (int): The x coordinate of the bottom right corner.
to_y (int): The y coordinate of the bottom right corner.

Source code in kernpy/core/tokens.py
class BoundingBox:
    """
    BoundingBox class.

    It contains the coordinates of the score bounding box. Useful for full-page tasks.

    Attributes:
        from_x (int): The x coordinate of the top left corner
        from_y (int): The y coordinate of the top left corner
        to_x (int): The x coordinate of the bottom right corner
        to_y (int): The y coordinate of the bottom right corner
    """

    def __init__(self, x, y, w, h):
        """
        BoundingBox constructor.

        Args:
            x (int): The x coordinate of the top left corner
            y (int): The y coordinate of the top left corner
            w (int): The width
            h (int): The height
        """
        self.from_x = x
        self.from_y = y
        self.to_x = x + w
        self.to_y = y + h

    def w(self) -> int:
        """
        Returns the width of the box

        Returns:
            int: The width of the box
        """
        return self.to_x - self.from_x

    def h(self) -> int:
        """
        Returns the height of the box

        Returns:
            int: The height of the box
        """
        return self.to_y - self.from_y

    def extend(self, bounding_box) -> None:
        """
        Extends the bounding box. Modifies the current object in place.

        Args:
            bounding_box (BoundingBox): The bounding box to extend

        Returns:
            None
        """
        self.from_x = min(self.from_x, bounding_box.from_x)
        self.from_y = min(self.from_y, bounding_box.from_y)
        self.to_x = max(self.to_x, bounding_box.to_x)
        self.to_y = max(self.to_y, bounding_box.to_y)

    def __str__(self) -> str:
        """
        Returns a string representation of the bounding box

        Returns (str): The string representation of the bounding box
        """
        return f'(x={self.from_x}, y={self.from_y}, w={self.w()}, h={self.h()})'

    def xywh(self) -> str:
        """
        Returns a string representation of the bounding box.

        Returns:
            str: The string representation of the bounding box
        """
        return f'{self.from_x},{self.from_y},{self.w()},{self.h()}'
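
A minimal usage sketch of the class above. The `BoundingBox` definition is restated here so the snippet runs standalone; with kernpy installed, importing it from `kernpy.core.tokens` would be the assumed path.

```python
class BoundingBox:
    """Restatement of the documented class, for illustration only."""

    def __init__(self, x, y, w, h):
        self.from_x, self.from_y = x, y          # top-left corner
        self.to_x, self.to_y = x + w, y + h      # bottom-right corner

    def w(self) -> int:
        return self.to_x - self.from_x

    def h(self) -> int:
        return self.to_y - self.from_y

    def extend(self, other) -> None:
        # Grow this box to the union of both boxes.
        self.from_x = min(self.from_x, other.from_x)
        self.from_y = min(self.from_y, other.from_y)
        self.to_x = max(self.to_x, other.to_x)
        self.to_y = max(self.to_y, other.to_y)

    def xywh(self) -> str:
        return f'{self.from_x},{self.from_y},{self.w()},{self.h()}'


a = BoundingBox(10, 20, 100, 50)  # top-left (10, 20), 100 wide, 50 tall
b = BoundingBox(80, 10, 60, 90)   # overlaps a on the right
a.extend(b)                       # a now covers both boxes
print(a.xywh())                   # -> 10,10,130,90
```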

__init__(x, y, w, h)

BoundingBox constructor.

Parameters:

Name Type Description Default
x int

The x coordinate of the top left corner

required
y int

The y coordinate of the top left corner

required
w int

The width

required
h int

The height

required
Source code in kernpy/core/tokens.py
1901
1902
1903
1904
1905
1906
1907
1908
1909
1910
1911
1912
1913
1914
def __init__(self, x, y, w, h):
    """
    BoundingBox constructor.

    Args:
        x (int): The x coordinate of the top left corner
        y (int): The y coordinate of the top left corner
        w (int): The width
        h (int): The height
    """
    self.from_x = x
    self.from_y = y
    self.to_x = x + w
    self.to_y = y + h

__str__()

Returns a string representation of the bounding box

Returns (str): The string representation of the bounding box

Source code in kernpy/core/tokens.py
1950
1951
1952
1953
1954
1955
1956
def __str__(self) -> str:
    """
    Returns a string representation of the bounding box

    Returns (str): The string representation of the bounding box
    """
    return f'(x={self.from_x}, y={self.from_y}, w={self.w()}, h={self.h()})'

extend(bounding_box)

Extends the bounding box. Modifies the current object in place.

Parameters:

Name Type Description Default
bounding_box BoundingBox

The bounding box to extend

required

Returns:

Type Description
None

None

Source code in kernpy/core/tokens.py
1935
1936
1937
1938
1939
1940
1941
1942
1943
1944
1945
1946
1947
1948
def extend(self, bounding_box) -> None:
    """
    Extends the bounding box. Modifies the current object in place.

    Args:
        bounding_box (BoundingBox): The bounding box to extend

    Returns:
        None
    """
    self.from_x = min(self.from_x, bounding_box.from_x)
    self.from_y = min(self.from_y, bounding_box.from_y)
    self.to_x = max(self.to_x, bounding_box.to_x)
    self.to_y = max(self.to_y, bounding_box.to_y)

h()

Returns the height of the box

Returns:

Name Type Description
int int

The height of the box

Source code in kernpy/core/tokens.py
1925
1926
1927
1928
1929
1930
1931
1932
1933
def h(self) -> int:
    """
    Returns the height of the box

    Returns:
        int: The height of the box
    """
    return self.to_y - self.from_y

w()

Returns the width of the box

Returns:

Name Type Description
int int

The width of the box

Source code in kernpy/core/tokens.py
1916
1917
1918
1919
1920
1921
1922
1923
def w(self) -> int:
    """
    Returns the width of the box

    Returns:
        int: The width of the box
    """
    return self.to_x - self.from_x

xywh()

Returns a string representation of the bounding box.

Returns:

Name Type Description
str str

The string representation of the bounding box

Source code in kernpy/core/tokens.py
1958
1959
1960
1961
1962
1963
1964
1965
def xywh(self) -> str:
    """
    Returns a string representation of the bounding box.

    Returns:
        str: The string representation of the bounding box
    """
    return f'{self.from_x},{self.from_y},{self.w()},{self.h()}'

BoundingBoxMeasures

BoundingBoxMeasures class.

Source code in kernpy/core/document.py
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
class BoundingBoxMeasures:
    """
    BoundingBoxMeasures class.
    """

    def __init__(
            self,
            bounding_box,
            from_measure: int,
            to_measure: int
    ):
        """
        Create an instance of BoundingBoxMeasures.

        Args:
            bounding_box: The bounding box object of the node.
            from_measure (int): The first measure of the score in the BoundingBoxMeasures object.
            to_measure (int): The last measure of the score in the BoundingBoxMeasures object.
        """
        self.from_measure = from_measure
        self.to_measure = to_measure
        self.bounding_box = bounding_box

__init__(bounding_box, from_measure, to_measure)

Create an instance of BoundingBoxMeasures.

Parameters:

Name Type Description Default
bounding_box

The bounding box object of the node.

required
from_measure int

The first measure of the score in the BoundingBoxMeasures object.

required
to_measure int

The last measure of the score in the BoundingBoxMeasures object.

required
Source code in kernpy/core/document.py
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
def __init__(
        self,
        bounding_box,
        from_measure: int,
        to_measure: int
):
    """
    Create an instance of BoundingBoxMeasures.

    Args:
        bounding_box: The bounding box object of the node.
        from_measure (int): The first measure of the score in the BoundingBoxMeasures object.
        to_measure (int): The last measure of the score in the BoundingBoxMeasures object.
    """
    self.from_measure = from_measure
    self.to_measure = to_measure
    self.bounding_box = bounding_box

BoundingBoxToken

Bases: Token

BoundingBoxToken class.

It contains the coordinates of the score bounding box. Useful for full-page tasks.

Attributes:

Name Type Description
encoding str

The complete unprocessed encoding

page_number int

The page number

bounding_box BoundingBox

The bounding box

Source code in kernpy/core/tokens.py
1968
1969
1970
1971
1972
1973
1974
1975
1976
1977
1978
1979
1980
1981
1982
1983
1984
1985
1986
1987
1988
1989
1990
1991
1992
1993
1994
1995
1996
1997
1998
1999
class BoundingBoxToken(Token):
    """
    BoundingBoxToken class.

    It contains the coordinates of the score bounding box. Useful for full-page tasks.

    Attributes:
        encoding (str): The complete unprocessed encoding
        page_number (int): The page number
        bounding_box (BoundingBox): The bounding box
    """

    def __init__(
            self,
            encoding: str,
            page_number: int,
            bounding_box: BoundingBox
    ):
        """
        BoundingBoxToken constructor.

        Args:
            encoding (str): The complete unprocessed encoding
            page_number (int): The page number
            bounding_box (BoundingBox): The bounding box
        """
        super().__init__(encoding, TokenCategory.BOUNDING_BOXES)
        self.page_number = page_number
        self.bounding_box = bounding_box

    def export(self, **kwargs) -> str:
        return self.encoding

__init__(encoding, page_number, bounding_box)

BoundingBoxToken constructor.

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
page_number int

The page number

required
bounding_box BoundingBox

The bounding box

required
Source code in kernpy/core/tokens.py
1980
1981
1982
1983
1984
1985
1986
1987
1988
1989
1990
1991
1992
1993
1994
1995
1996
def __init__(
        self,
        encoding: str,
        page_number: int,
        bounding_box: BoundingBox
):
    """
    BoundingBoxToken constructor.

    Args:
        encoding (str): The complete unprocessed encoding
        page_number (int): The page number
        bounding_box (BoundingBox): The bounding box
    """
    super().__init__(encoding, TokenCategory.BOUNDING_BOXES)
    self.page_number = page_number
    self.bounding_box = bounding_box

C1Clef

Bases: Clef

Source code in kernpy/core/gkern.py
391
392
393
394
395
396
397
398
399
400
401
402
class C1Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('C'), 1)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('C', 3)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
392
393
394
395
396
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('C'), 1)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
398
399
400
401
402
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('C', 3)

C2Clef

Bases: Clef

Source code in kernpy/core/gkern.py
404
405
406
407
408
409
410
411
412
413
414
415
class C2Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('A'), 2)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('A', 2)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
405
406
407
408
409
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('A'), 2)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
411
412
413
414
415
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('A', 2)

C3Clef

Bases: Clef

Source code in kernpy/core/gkern.py
418
419
420
421
422
423
424
425
426
427
428
429
class C3Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('C'), 3)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('B', 2)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
419
420
421
422
423
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('C'), 3)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
425
426
427
428
429
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('B', 2)

C4Clef

Bases: Clef

Source code in kernpy/core/gkern.py
431
432
433
434
435
436
437
438
439
440
441
442
class C4Clef(Clef):
    def __init__(self):
        """
        Initializes the C Clef object.
        """
        super().__init__(DiatonicPitch('C'), 4)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('D', 2)

__init__()

Initializes the C Clef object.

Source code in kernpy/core/gkern.py
432
433
434
435
436
def __init__(self):
    """
    Initializes the C Clef object.
    """
    super().__init__(DiatonicPitch('C'), 4)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
438
439
440
441
442
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('D', 2)

ChordToken

Bases: SimpleToken

ChordToken class.

It contains a list of compound tokens

Source code in kernpy/core/tokens.py
1854
1855
1856
1857
1858
1859
1860
1861
1862
1863
1864
1865
1866
1867
1868
1869
1870
1871
1872
1873
1874
1875
1876
1877
1878
1879
1880
1881
1882
1883
1884
1885
class ChordToken(SimpleToken):
    """
    ChordToken class.

    It contains a list of compound tokens
    """

    def __init__(self,
                 encoding: str,
                 category: TokenCategory,
                 notes_tokens: Sequence[Token]
                 ):
        """
        ChordToken constructor.

        Args:
            encoding (str): The complete unprocessed encoding
            category (TokenCategory): The token category, one of TokenCategory
            notes_tokens (Sequence[Token]): The subtokens for the notes. Individual elements of the token, of type token
        """
        super().__init__(encoding, category)
        self.notes_tokens = notes_tokens

    def export(self, **kwargs) -> str:
        result = ''
        for note_token in self.notes_tokens:
            if len(result) > 0:
                result += ' '

            result += note_token.export()

        return result
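
The `export` above joins each note's export with a single space. A small sketch of that behavior, using a hypothetical stand-in `Note` class (not part of kernpy) in place of the real note tokens:

```python
class Note:
    """Stand-in for a note token: export() returns its raw encoding."""

    def __init__(self, encoding: str):
        self.encoding = encoding

    def export(self) -> str:
        return self.encoding


def chord_export(notes) -> str:
    """Mirrors ChordToken.export: space-join the exports of the chord's notes."""
    return ' '.join(note.export() for note in notes)


print(chord_export([Note('4c'), Note('4e'), Note('4g')]))  # -> 4c 4e 4g
```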

__init__(encoding, category, notes_tokens)

ChordToken constructor.

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
category TokenCategory

The token category, one of TokenCategory

required
notes_tokens Sequence[Token]

The subtokens for the notes. Individual elements of the token, of type token

required
Source code in kernpy/core/tokens.py
1861
1862
1863
1864
1865
1866
1867
1868
1869
1870
1871
1872
1873
1874
1875
def __init__(self,
             encoding: str,
             category: TokenCategory,
             notes_tokens: Sequence[Token]
             ):
    """
    ChordToken constructor.

    Args:
        encoding (str): The complete unprocessed encoding
        category (TokenCategory): The token category, one of TokenCategory
        notes_tokens (Sequence[Token]): The subtokens for the notes. Individual elements of the token, of type token
    """
    super().__init__(encoding, category)
    self.notes_tokens = notes_tokens

Clef

Bases: ABC

Abstract class representing a clef.

Source code in kernpy/core/gkern.py
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
class Clef(ABC):
    """
    Abstract class representing a clef.
    """

    def __init__(self, diatonic_pitch: DiatonicPitch, on_line: int):
        """
        Initializes the Clef object.
        Args:
            diatonic_pitch (DiatonicPitch): The diatonic pitch of the clef (e.g., 'C', 'G', 'F'). This value is used as a decorator.
            on_line (int): The line number on which the clef is placed (1 for bottom line, 2 for 1st line from bottom, etc.). This value is used as a decorator.
        """
        self.diatonic_pitch = diatonic_pitch
        self.on_line = on_line

    @abstractmethod
    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        ...

    def name(self):
        """
        Returns the name of the clef.
        """
        return f"{self.diatonic_pitch} on line {self.on_line}"

    def reference_point(self) -> PitchPositionReferenceSystem:
        """
        Returns the reference point for the clef.
        """
        return PitchPositionReferenceSystem(self.bottom_line())

    def __str__(self) -> str:
        """
        Returns:
            str: The string representation of the clef.
        """
        return f'{self.diatonic_pitch.encoding.upper()} on the {self.on_line}{self._ordinal_suffix(self.on_line)} line'

    @staticmethod
    def _ordinal_suffix(number: int) -> str:
        """
        Returns the ordinal suffix for a given integer (e.g. 'st', 'nd', 'rd', 'th').

        Args:
            number (int): The number to get the suffix for.

        Returns:
            str: The ordinal suffix.
        """
        # 11, 12, 13 always take “th”
        if 11 <= (number % 100) <= 13:
            return 'th'
        # otherwise use last digit
        last = number % 10
        if last == 1:
            return 'st'
        elif last == 2:
            return 'nd'
        elif last == 3:
            return 'rd'
        else:
            return 'th'
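
The ordinal-suffix helper used by `__str__` above can be exercised on its own. A restatement for illustration (same rule: 11 through 13 always take 'th', otherwise the last digit decides):

```python
def ordinal_suffix(number: int) -> str:
    """Restates Clef._ordinal_suffix."""
    if 11 <= (number % 100) <= 13:   # 11th, 12th, 13th (and 111th, 212th, ...)
        return 'th'
    return {1: 'st', 2: 'nd', 3: 'rd'}.get(number % 10, 'th')


print([f'{n}{ordinal_suffix(n)}' for n in (1, 2, 3, 4, 11, 21, 112)])
# -> ['1st', '2nd', '3rd', '4th', '11th', '21st', '112th']
```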

__init__(diatonic_pitch, on_line)

Initializes the Clef object.

Parameters:

Name Type Description Default
diatonic_pitch DiatonicPitch

The diatonic pitch of the clef (e.g., 'C', 'G', 'F'). This value is used as a decorator.

required
on_line int

The line number on which the clef is placed (1 for bottom line, 2 for 1st line from bottom, etc.). This value is used as a decorator.

required

Source code in kernpy/core/gkern.py
290
291
292
293
294
295
296
297
298
def __init__(self, diatonic_pitch: DiatonicPitch, on_line: int):
    """
    Initializes the Clef object.
    Args:
        diatonic_pitch (DiatonicPitch): The diatonic pitch of the clef (e.g., 'C', 'G', 'F'). This value is used as a decorator.
        on_line (int): The line number on which the clef is placed (1 for bottom line, 2 for 1st line from bottom, etc.). This value is used as a decorator.
    """
    self.diatonic_pitch = diatonic_pitch
    self.on_line = on_line

__str__()

Returns:

Name Type Description
str str

The string representation of the clef.

Source code in kernpy/core/gkern.py
319
320
321
322
323
324
def __str__(self) -> str:
    """
    Returns:
        str: The string representation of the clef.
    """
    return f'{self.diatonic_pitch.encoding.upper()} on the {self.on_line}{self._ordinal_suffix(self.on_line)} line'

bottom_line() abstractmethod

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
300
301
302
303
304
305
@abstractmethod
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    ...

name()

Returns the name of the clef.

Source code in kernpy/core/gkern.py
307
308
309
310
311
def name(self):
    """
    Returns the name of the clef.
    """
    return f"{self.diatonic_pitch} on line {self.on_line}"

reference_point()

Returns the reference point for the clef.

Source code in kernpy/core/gkern.py
313
314
315
316
317
def reference_point(self) -> PitchPositionReferenceSystem:
    """
    Returns the reference point for the clef.
    """
    return PitchPositionReferenceSystem(self.bottom_line())

ClefFactory

Source code in kernpy/core/gkern.py
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
class ClefFactory:
    CLEF_NAMES = { 'G', 'F', 'C' }
    @classmethod
    def create_clef(cls, encoding: str) -> Clef:
        """
        Creates a Clef object based on the given token.

        Clefs are encoded in interpretation tokens that start with a single * followed by the string clef and then the shape and line position of the clef. For example, a treble clef is *clefG2, with G meaning a G-clef, and 2 meaning that the clef is centered on the second line up from the bottom of the staff. The bass clef is *clefF4 since it is an F-clef on the fourth line of the staff.
        A vocal tenor clef is represented by *clefGv2, where the v means the music should be played an octave lower than the regular clef’s sounding pitches. Try creating a vocal tenor clef in the above interactive example. The v operator also works on the other clefs (but these sorts of clefs are very rare). Another rare clef is *clefG^2 which is the opposite of *clefGv2, where the music is written an octave lower than actually sounding pitch for the normal form of the clef. You can also try to create exotic two-octave clefs by doubling the ^^ and vv markers.

        Args:
            encoding (str): The encoding of the clef token.

        Returns:
            Clef: The Clef instance corresponding to the given encoding.
        """
        encoding = encoding.replace('*clef', '')

        # at this point the encoding is like G2, F4,... or Gv2, F^4,... or G^^2, Fvv4,... or G^^...^^2, Fvvv4,...
        name = list(filter(lambda x: x in cls.CLEF_NAMES, encoding))[0]
        line = int(list(filter(lambda x: x.isdigit(), encoding))[0])
        decorators = ''.join(filter(lambda x: x in ['^', 'v'], encoding))

        if name not in cls.CLEF_NAMES:
            raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")

        if name == 'G':
            return GClef()
        elif name == 'F':
            if line == 3:
                return F3Clef()
            elif line == 4:
                return F4Clef()
            else:
                raise ValueError(f"Invalid F clef line: {line}. Expected 3 or 4.")
        elif name == 'C':
            if line == 1:
                return C1Clef()
            elif line == 2:
                return C2Clef()
            elif line == 3:
                return C3Clef()
            elif line == 4:
                return C4Clef()
            else:
                raise ValueError(f"Invalid C clef line: {line}. Expected 1, 2, 3 or 4.")
        else:
            raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")
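
The parsing step of `create_clef` can be sketched in isolation. This is a simplified restatement of how the shape, line, and octave decorators are pulled out of an encoding (note that, as in the source above, the decorators are extracted but do not yet influence which Clef is returned):

```python
def parse_clef(encoding: str):
    """Mirrors the extraction logic in ClefFactory.create_clef."""
    body = encoding.replace('*clef', '')
    name = next(ch for ch in body if ch in {'G', 'F', 'C'})        # clef shape
    line = int(next(ch for ch in body if ch.isdigit()))            # staff line
    decorators = ''.join(ch for ch in body if ch in ('^', 'v'))    # octave markers
    return name, line, decorators


print(parse_clef('*clefG2'))   # -> ('G', 2, '')   treble clef
print(parse_clef('*clefF4'))   # -> ('F', 4, '')   bass clef
print(parse_clef('*clefGv2'))  # -> ('G', 2, 'v')  vocal tenor clef
```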

create_clef(encoding) classmethod

Creates a Clef object based on the given token.

Clefs are encoded in interpretation tokens that start with a single * followed by the string clef and then the shape and line position of the clef. For example, a treble clef is *clefG2, with G meaning a G-clef, and 2 meaning that the clef is centered on the second line up from the bottom of the staff. The bass clef is *clefF4 since it is an F-clef on the fourth line of the staff. A vocal tenor clef is represented by *clefGv2, where the v means the music should be played an octave lower than the regular clef’s sounding pitches. The v operator also works on the other clefs (but these sorts of clefs are very rare). Another rare clef is *clefG^2, the opposite of *clefGv2: the music is written an octave lower than the actual sounding pitch for the normal form of the clef. You can also build exotic two-octave clefs by doubling the markers to ^^ and vv.

Parameters:

Name Type Description Default
encoding str

The encoding of the clef token.

required

Returns:

Type Description
Clef

The Clef instance corresponding to the given encoding.

Source code in kernpy/core/gkern.py
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
@classmethod
def create_clef(cls, encoding: str) -> Clef:
    """
    Creates a Clef object based on the given token.

    Clefs are encoded in interpretation tokens that start with a single * followed by the string clef and then the shape and line position of the clef. For example, a treble clef is *clefG2, with G meaning a G-clef, and 2 meaning that the clef is centered on the second line up from the bottom of the staff. The bass clef is *clefF4 since it is an F-clef on the fourth line of the staff.
    A vocal tenor clef is represented by *clefGv2, where the v means the music should be played an octave lower than the regular clef’s sounding pitches. Try creating a vocal tenor clef in the above interactive example. The v operator also works on the other clefs (but these sorts of clefs are very rare). Another rare clef is *clefG^2 which is the opposite of *clefGv2, where the music is written an octave lower than actually sounding pitch for the normal form of the clef. You can also try to create exotic two-octave clefs by doubling the ^^ and vv markers.

    Args:
        encoding (str): The encoding of the clef token.

    Returns:
        Clef: The Clef instance corresponding to the given encoding.
    """
    encoding = encoding.replace('*clef', '')

    # at this point the encoding is like G2, F4,... or Gv2, F^4,... or G^^2, Fvv4,... or G^^...^^2, Fvvv4,...
    name = list(filter(lambda x: x in cls.CLEF_NAMES, encoding))[0]
    line = int(list(filter(lambda x: x.isdigit(), encoding))[0])
    decorators = ''.join(filter(lambda x: x in ['^', 'v'], encoding))

    if name not in cls.CLEF_NAMES:
        raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")

    if name == 'G':
        return GClef()
    elif name == 'F':
        if line == 3:
            return F3Clef()
        elif line == 4:
            return F4Clef()
        else:
            raise ValueError(f"Invalid F clef line: {line}. Expected 3 or 4.")
    elif name == 'C':
        if line == 1:
            return C1Clef()
        elif line == 2:
            return C2Clef()
        elif line == 3:
            return C3Clef()
        elif line == 4:
            return C4Clef()
        else:
            raise ValueError(f"Invalid C clef line: {line}. Expected 1, 2, 3 or 4.")
    else:
        raise ValueError(f"Invalid clef name: {name}. Expected one of {cls.CLEF_NAMES}.")

ClefToken

Bases: SignatureToken

ClefToken class.

Source code in kernpy/core/tokens.py
1663
1664
1665
1666
1667
1668
1669
class ClefToken(SignatureToken):
    """
    ClefToken class.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.CLEF)

ComplexToken

Bases: Token, ABC

Abstract ComplexToken class. This abstract class ensures that subclasses implement the export method using the 'filter_categories' parameter to filter the subtokens.

Passing the 'filter_categories' argument through **kwargs does not break compatibility with the parent classes.

This keeps the hierarchy compliant with the Liskov substitution principle.

Source code in kernpy/core/tokens.py
1708
1709
1710
1711
1712
1713
1714
1715
1716
1717
1718
1719
1720
1721
1722
1723
1724
1725
1726
1727
1728
1729
1730
1731
1732
1733
1734
1735
1736
1737
1738
1739
class ComplexToken(Token, ABC):
    """
    Abstract ComplexToken class. This abstract class ensures that the subclasses implement the export method using\
     the 'filter_categories' parameter to filter the subtokens.

     Passing the 'filter_categories' argument through **kwargs does not break compatibility with the parent classes.

     This keeps the hierarchy compliant with the Liskov substitution principle.
    """
    def __init__(self, encoding: str, category: TokenCategory):
        """
        Constructor for the ComplexToken

        Args:
            encoding (str): The original representation of the token.
            category (TokenCategory) : The category of the token.
        """
        super().__init__(encoding, category)

    @abstractmethod
    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns (str): The exported token.
        """
        pass

__init__(encoding, category)

Constructor for the ComplexToken

Parameters:

Name Type Description Default
encoding str

The original representation of the token.

required
category TokenCategory)

The category of the token.

required
Source code in kernpy/core/tokens.py
1717
1718
1719
1720
1721
1722
1723
1724
1725
def __init__(self, encoding: str, category: TokenCategory):
    """
    Constructor for the ComplexToken

    Args:
        encoding (str): The original representation of the token.
        category (TokenCategory) : The category of the token.
    """
    super().__init__(encoding, category)

export(**kwargs) abstractmethod

Exports the token.

Other Parameters:

Name Type Description
filter_categories Optional[Callable[[TokenCategory], bool]]

A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None. If None, all tokens will be exported.

Returns (str): The exported token.

Source code in kernpy/core/tokens.py
1727
1728
1729
1730
1731
1732
1733
1734
1735
1736
1737
1738
1739
@abstractmethod
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns (str): The exported token.
    """
    pass

CompoundToken

Bases: ComplexToken

Source code in kernpy/core/tokens.py
1742
1743
1744
1745
1746
1747
1748
1749
1750
1751
1752
1753
1754
1755
1756
1757
1758
1759
1760
1761
1762
1763
1764
1765
1766
1767
1768
1769
1770
1771
1772
1773
1774
1775
1776
1777
class CompoundToken(ComplexToken):
    def __init__(self, encoding: str, category: TokenCategory, subtokens: List[Subtoken]):
        """
        Args:
            encoding (str): The complete unprocessed encoding
            category (TokenCategory): The token category, one of 'TokenCategory'
            subtokens (List[Subtoken]): The individual elements of the token. Also of type 'TokenCategory' but \
                in the hierarchy they must be children of the current token.
        """
        super().__init__(encoding, category)

        for subtoken in subtokens:
            if not isinstance(subtoken, Subtoken):
                raise ValueError(f'All subtokens must be instances of Subtoken. Found {type(subtoken)}')

        self.subtokens = subtokens

    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns (str): The exported token.
        """
        filter_categories_fn = kwargs.get('filter_categories', None)
        parts = []
        for subtoken in self.subtokens:
            # Only export the subtoken if it passes the filter_categories (if provided)
            if filter_categories_fn is None or filter_categories_fn(subtoken.category):
                # parts.append(subtoken.export(**kwargs)) in the future when SubTokens will be Tokens
                parts.append(subtoken.encoding)
        return TOKEN_SEPARATOR.join(parts) if len(parts) > 0 else EMPTY_TOKEN
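
The filter_categories keyword above can be demonstrated with stand-in types. The TokenCategory values, the '@' separator, and the '·' empty-token marker below are illustrative assumptions, not the library's actual constants:

```python
from enum import Enum, auto


class TokenCategory(Enum):
    """Stand-in categories; the real enum lives in kernpy.core.tokens."""
    DURATION = auto()
    PITCH = auto()
    DECORATION = auto()


class Subtoken:
    def __init__(self, encoding: str, category: TokenCategory):
        self.encoding = encoding
        self.category = category


def export_compound(subtokens, filter_categories=None,
                    separator='@', empty_token='·'):
    """Mirrors CompoundToken.export: keep only subtokens whose category
    passes the predicate, then join them with the separator."""
    parts = [s.encoding for s in subtokens
             if filter_categories is None or filter_categories(s.category)]
    return separator.join(parts) if parts else empty_token


subtokens = [Subtoken('2', TokenCategory.DURATION),
             Subtoken('bb', TokenCategory.PITCH),
             Subtoken('L', TokenCategory.DECORATION)]

keep = lambda cat: cat in (TokenCategory.DURATION, TokenCategory.PITCH)
print(export_compound(subtokens, filter_categories=keep))  # -> 2@bb
print(export_compound(subtokens))                          # -> 2@bb@L
```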

__init__(encoding, category, subtokens)

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
category TokenCategory

The token category, one of 'TokenCategory'

required
subtokens List[Subtoken]

The individual elements of the token. Also of type 'TokenCategory' but in the hierarchy they must be children of the current token.

required
Source code in kernpy/core/tokens.py
1743
1744
1745
1746
1747
1748
1749
1750
1751
1752
1753
1754
1755
1756
1757
def __init__(self, encoding: str, category: TokenCategory, subtokens: List[Subtoken]):
    """
    Args:
        encoding (str): The complete unprocessed encoding
        category (TokenCategory): The token category, one of 'TokenCategory'
        subtokens (List[Subtoken]): The individual elements of the token. Also of type 'TokenCategory' but \
            in the hierarchy they must be children of the current token.
    """
    super().__init__(encoding, category)

    for subtoken in subtokens:
        if not isinstance(subtoken, Subtoken):
            raise ValueError(f'All subtokens must be instances of Subtoken. Found {type(subtoken)}')

    self.subtokens = subtokens

export(**kwargs)

Exports the token.

Other Parameters:

Name Type Description
filter_categories Optional[Callable[[TokenCategory], bool]]

A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None. If None, all tokens will be exported.

Returns (str): The exported token.

Source code in kernpy/core/tokens.py
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns (str): The exported token.
    """
    filter_categories_fn = kwargs.get('filter_categories', None)
    parts = []
    for subtoken in self.subtokens:
        # Only export the subtoken if it passes the filter_categories (if provided)
        if filter_categories_fn is None or filter_categories_fn(subtoken.category):
            # parts.append(subtoken.export(**kwargs)) in the future when SubTokens will be Tokens
            parts.append(subtoken.encoding)
    return TOKEN_SEPARATOR.join(parts) if len(parts) > 0 else EMPTY_TOKEN
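The export flow above can be exercised standalone. A minimal sketch, using plain `(encoding, category)` tuples in place of `Subtoken` objects, and assumed stand-in values for `TOKEN_SEPARATOR` and `EMPTY_TOKEN` (the real constants live in `kernpy.core.tokens`):

```python
# Sketch of the export() filtering logic. TOKEN_SEPARATOR and EMPTY_TOKEN
# are assumptions for illustration; see kernpy.core.tokens for the real values.
TOKEN_SEPARATOR = ' '
EMPTY_TOKEN = '.'

def export_subtokens(subtokens, filter_categories=None):
    """Join the encodings of the subtokens whose category passes the filter."""
    parts = [
        encoding
        for encoding, category in subtokens
        if filter_categories is None or filter_categories(category)
    ]
    return TOKEN_SEPARATOR.join(parts) if parts else EMPTY_TOKEN

subtokens = [('4', 'DURATION'), ('c', 'PITCH'), ('/', 'DECORATION')]
print(export_subtokens(subtokens))                          # every subtoken kept
print(export_subtokens(subtokens, lambda c: c == 'PITCH'))  # pitch subtokens only
print(export_subtokens(subtokens, lambda c: False))         # nothing passes -> EMPTY_TOKEN
```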

Document

Document class.

This class stores the score content using an agnostic tree structure.

Attributes:

Name Type Description
tree MultistageTree

The tree structure of the document where all the nodes are stored. Each stage of the tree corresponds to a row in the Humdrum **kern file encoding.

measure_start_tree_stages List[List[Node]]

The list of nodes that correspond to the measures. Empty list by default. The list index starts from 1, counting rows after removing empty lines and line comments.

page_bounding_boxes Dict[int, BoundingBoxMeasures]

The dictionary of page bounding boxes. - key: page number - value: BoundingBoxMeasures object

header_stage int

The index of the stage that contains the headers. None by default.

Source code in kernpy/core/document.py
class Document:
    """
    Document class.

    This class stores the score content using an agnostic tree structure.

    Attributes:
        tree (MultistageTree): The tree structure of the document where all the nodes are stored. \
            Each stage of the tree corresponds to a row in the Humdrum **kern file encoding.
        measure_start_tree_stages (List[List[Node]]): The list of nodes that correspond to the measures. \
            Empty list by default.
            The list index starts from 1, counting rows after removing empty lines and line comments.
        page_bounding_boxes (Dict[int, BoundingBoxMeasures]): The dictionary of page bounding boxes. \
            - key: page number
            - value: BoundingBoxMeasures object
        header_stage (int): The index of the stage that contains the headers. None by default.
    """

    def __init__(self, tree: MultistageTree):
        """
        Constructor for Document class.

        Args:
            tree (MultistageTree): The tree structure of the document where all the nodes are stored.
        """
        self.tree = tree  # TODO: ? Should we use copy.deepcopy() here?
        self.measure_start_tree_stages = []
        self.page_bounding_boxes = {}
        self.header_stage = None

    FIRST_MEASURE = 1

    def get_header_stage(self) -> Union[List[Node], List[List[Node]]]:
        """
        Get the Node list of the header stage.

        Returns: (Union[List[Node], List[List[Node]]]) The Node list of the header stage.

        Raises: Exception - If the document has no header stage.
        """
        if self.header_stage:
            return self.tree.stages[self.header_stage]
        else:
            raise Exception('No header stage found')

    def get_leaves(self) -> List[Node]:
        """
        Get the leaves of the tree.

        Returns: (List[Node]) The leaves of the tree.
        """
        return self.tree.stages[len(self.tree.stages) - 1]

    def get_spine_count(self) -> int:
        """
        Get the number of spines in the document.

        Returns (int): The number of spines in the document.
        """
        return len(self.get_header_stage())  # TODO: test refactor

    def get_first_measure(self) -> int:
        """
        Get the index of the first measure of the document.

        Returns: (int) The index of the first measure of the document.

        Raises: Exception - If the document has no measures.

        Examples:
            >>> import kernpy as kp
            >>> document, err = kp.read('score.krn')
            >>> document.get_first_measure()
            1
        """
        if len(self.measure_start_tree_stages) == 0:
            raise Exception('No measures found')

        return self.FIRST_MEASURE

    def measures_count(self) -> int:
        """
        Get the index of the last measure of the document.

        Returns: (int) The index of the last measure of the document.

        Raises: Exception - If the document has no measures.

        Examples:
            >>> document, _ = kernpy.read('score.krn')
            >>> document.measures_count()
            10
            >>> for i in range(document.get_first_measure(), document.measures_count() + 1):
            >>>   options = kernpy.ExportOptions(from_measure=i, to_measure=i+4)
        """
        if len(self.measure_start_tree_stages) == 0:
            raise Exception('No measures found')

        return len(self.measure_start_tree_stages)

    def get_metacomments(self, KeyComment: Optional[str] = None, clear: bool = False) -> List[str]:
        """
        Get all metacomments in the document

        Args:
            KeyComment: Filter by a specific metacomment key: e.g. Use 'COM' to get only comments starting with\
                '!!!COM: '. If None, all metacomments are returned.
            clear: If True, the metacomment key is removed from the comment. E.g. '!!!COM: Coltrane' -> 'Coltrane'.\
                If False, the metacomment key is kept. E.g. '!!!COM: Coltrane' -> '!!!COM: Coltrane'. \
                The clear functionality is equivalent to the following code:
                ```python
                comment = '!!!COM: Coltrane'
                clean_comment = comment.replace(f"!!!{KeyComment}: ", "")
                ```
                Other formats are not supported.

        Returns: A list of metacomments.

        Examples:
            >>> document.get_metacomments()
            ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
            >>> document.get_metacomments(KeyComment='COM')
            ['!!!COM: Coltrane']
            >>> document.get_metacomments(KeyComment='COM', clear=True)
            ['Coltrane']
            >>> document.get_metacomments(KeyComment='non_existing_key')
            []
        """
        traversal = MetacommentsTraversal()
        self.tree.dfs_iterative(traversal)
        result = []
        for metacomment in traversal.metacomments:
            if KeyComment is None or metacomment.encoding.startswith(f"!!!{KeyComment}"):
                new_comment = metacomment.encoding
                if clear:
                    new_comment = metacomment.encoding.replace(f"!!!{KeyComment}: ", "")
                result.append(new_comment)

        return result

    @classmethod
    def tokens_to_encodings(cls, tokens: Sequence[AbstractToken]):
        """
        Get the encodings of a list of tokens.

        The method is equivalent to the following code:
            >>> tokens = kp.get_all_tokens()
            >>> [token.encoding for token in tokens if token.encoding is not None]

        Args:
            tokens (Sequence[AbstractToken]): list - A list of tokens.

        Returns: List[str] - A list of token encodings.

        Examples:
            >>> tokens = document.get_all_tokens()
            >>> Document.tokens_to_encodings(tokens)
            ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
        """
        encodings = [token.encoding for token in tokens if token.encoding is not None]
        return encodings

    def get_all_tokens(self, filter_by_categories: Optional[Sequence[TokenCategory]] = None) -> List[AbstractToken]:
        """
        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

        Returns:
            List[AbstractToken] - A list of all tokens.

        Examples:
            >>> tokens = document.get_all_tokens()
            >>> Document.tokens_to_encodings(tokens)
            >>> [type(t) for t in tokens]
            [<class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>]
        """
        computed_categories = TokenCategory.valid(include=filter_by_categories)
        traversal = TokensTraversal(False, computed_categories)
        self.tree.dfs_iterative(traversal)
        return traversal.tokens

    def get_all_tokens_encodings(
            self,
            filter_by_categories: Optional[Sequence[TokenCategory]] = None
    ) -> List[str]:
        """
        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.


        Returns:
            list[str] - A list of all token encodings.

        Examples:
            >>> tokens = document.get_all_tokens_encodings()
            >>> Document.tokens_to_encodings(tokens)
            ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
        """
        tokens = self.get_all_tokens(filter_by_categories)
        return Document.tokens_to_encodings(tokens)

    def get_unique_tokens(
            self,
            filter_by_categories: Optional[Sequence[TokenCategory]] = None
    ) -> List[AbstractToken]:
        """
        Get unique tokens.

        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

        Returns:
            List[AbstractToken] - A list of unique tokens.

        """
        computed_categories = TokenCategory.valid(include=filter_by_categories)
        traversal = TokensTraversal(True, computed_categories)
        self.tree.dfs_iterative(traversal)
        return traversal.tokens

    def get_unique_token_encodings(
            self,
            filter_by_categories: Optional[Sequence[TokenCategory]] = None
    ) -> List[str]:
        """
        Get unique token encodings.

        Args:
            filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

        Returns: List[str] - A list of unique token encodings.

        """
        tokens = self.get_unique_tokens(filter_by_categories)
        return Document.tokens_to_encodings(tokens)

    def get_voices(self, clean: bool = False):
        """
        Get the voices of the document.

        Args:
            clean (bool): Remove the first '!' from the voice name.

        Returns: A list of voices.

        Examples:
            >>> document.get_voices()
            ['!sax', '!piano', '!bass']
            >>> document.get_voices(clean=True)
            ['sax', 'piano', 'bass']
            >>> document.get_voices(clean=False)
            ['!sax', '!piano', '!bass']
        """
        from kernpy.core import TokenCategory
        voices = self.get_all_tokens(filter_by_categories=[TokenCategory.INSTRUMENTS])

        if clean:
            voices = [voice[1:] for voice in voices]
        return voices

    def clone(self):
        """
        Create a deep copy of the Document instance.

        Returns: A new instance of Document with the tree copied.

        """
        result = Document(copy(self.tree))
        result.measure_start_tree_stages = copy(self.measure_start_tree_stages)
        result.page_bounding_boxes = copy(self.page_bounding_boxes)
        result.header_stage = copy(self.header_stage)

        return result

    def append_spines(self, spines) -> None:
        """
        Append the spines directly to the current document tree.

        Args:
            spines(list): A list of spines to append.

        Returns: None

        Examples:
            >>> import kernpy as kp
            >>> doc, _ = kp.read('score.krn')
            >>> spines = [
            >>> '4e\t4f\t4g\t4a\n4b\t4c\t4d\t4e\n=\t=\t=\t=\n',
            >>> '4c\t4d\t4e\t4f\n4g\t4a\t4b\t4c\n=\t=\t=\t=\n',
            >>> ]
            >>> doc.append_spines(spines)
            None
        """
        raise NotImplementedError()
        if len(spines) != self.get_spine_count():
            raise Exception(f"Spines count mismatch: {len(spines)} != {self.get_spine_count()}")

        for spine in spines:
            return

    def add(self, other: 'Document', *, check_core_spines_only: Optional[bool] = False) -> 'Document':
        """
        Concatenate one document to the current document: Modify the current object!

        Args:
            other: The document to concatenate.
            check_core_spines_only: If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

        Returns ('Document'): The current document (self) with the other document concatenated.
        """
        if not Document.match(self, other, check_core_spines_only=check_core_spines_only):
            raise Exception(f'Documents are not compatible for addition. '
                            f'Headers do not match with check_core_spines_only={check_core_spines_only}. '
                            f'self: {self.get_header_nodes()}, other: {other.get_header_nodes()}. ')

        current_header_nodes = self.get_header_stage()
        other_header_nodes = other.get_header_stage()

        current_leaf_nodes = self.get_leaves()
        flatten = lambda lst: [item for sublist in lst for item in sublist]
        other_first_level_children = [flatten(c.children) for c in other_header_nodes]  # avoid header stage

        for current_leaf, other_first_level_child in zip(current_leaf_nodes, other_first_level_children, strict=False):
            # Ignore extra spines from other document.
            # But if there are extra spines in the current document, it will raise an exception.
            if current_leaf.token.encoding == TERMINATOR:
                # remove the '*-' token from the current document
                current_leaf_index = current_leaf.parent.children.index(current_leaf)
                current_leaf.parent.children.pop(current_leaf_index)
                current_leaf.parent.children.insert(current_leaf_index, other_first_level_child)

            self.tree.add_node(
                stage=len(self.tree.stages) - 1,  # TODO: check offset 0, +1, -1 ????
                parent=current_leaf,
                token=other_first_level_child.token,
                last_spine_operator_node=other_first_level_child.last_spine_operator_node,
                previous_signature_nodes=other_first_level_child.last_signature_nodes,
                header_node=other_first_level_child.header_node
            )

        return self

    def get_header_nodes(self) -> List[HeaderToken]:
        """
        Get the header nodes of the current document.

        Returns: List[HeaderToken]: A list with the header nodes of the current document.
        """
        return [token for token in self.get_all_tokens(filter_by_categories=None) if isinstance(token, HeaderToken)]

    def get_spine_ids(self) -> List[int]:
        """
                Get the indexes of the current document.

                Returns List[int]: A list with the indexes of the current document.

                Examples:
                    >>> document.get_all_spine_indexes()
                    [0, 1, 2, 3, 4]
                """
        header_nodes = self.get_header_nodes()
        return [node.spine_id for node in header_nodes]

    def frequencies(self, token_categories: Optional[Sequence[TokenCategory]] = None) -> Dict:
        """
        Frequency of tokens in the document.


        Args:
            token_categories (Optional[Sequence[TokenCategory]]): If None, all tokens are considered.
        Returns (Dict):
            A dictionary with the category and the number of occurrences of each token.

        """
        tokens = self.get_all_tokens(filter_by_categories=token_categories)
        frequencies = {}
        for t in tokens:
            if t.encoding in frequencies:
                frequencies[t.encoding]['occurrences'] += 1
            else:
                frequencies[t.encoding] = {
                    'occurrences': 1,
                    'category': t.category.name,
                }

        return frequencies

    def split(self) -> List['Document']:
        """
        Split the current document into a list of documents, one for each **kern spine.
        Each resulting document will contain one **kern spine along with all non-kern spines.

        Returns:
            List['Document']: A list of documents, where each document contains one **kern spine
            and all non-kern spines from the original document.

        Examples:
            >>> document.split()
            [<Document: score.krn>, <Document: score.krn>, <Document: score.krn>]
        """
        raise NotImplementedError
        new_documents = []
        self_document_copy = deepcopy(self)
        kern_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding == '**kern']
        other_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding != '**kern']
        spine_ids = self_document_copy.get_spine_ids()

        for header_node in kern_header_nodes:
            if header_node.spine_id not in spine_ids:
                continue

            spine_ids.remove(header_node.spine_id)

            new_tree = deepcopy(self.tree)
            prev_node = new_tree.root
            while not isinstance(prev_node, HeaderToken):
                prev_node = prev_node.children[0]

            if not prev_node or not isinstance(prev_node, HeaderToken):
                raise Exception(f'Header node not found: {prev_node} in {header_node}')

            new_children = list(filter(lambda x: x.spine_id == header_node.spine_id, prev_node.children))
            new_tree.root = new_children

            new_document = Document(new_tree)

            new_documents.append(new_document)

        return new_documents

    @classmethod
    def to_concat(cls, first_doc: 'Document', second_doc: 'Document', deep_copy: bool = True) -> 'Document':
        """
        Concatenate two documents.

        Args:
            first_doc (Document): The first document.
            second_doc (Document): The second document.
            deep_copy (bool): If True, the documents are deep copied. If False, the documents are shallow copied.

        Returns: A new instance of Document with the documents concatenated.
        """
        first_doc = first_doc.clone() if deep_copy else first_doc
        second_doc = second_doc.clone() if deep_copy else second_doc
        first_doc.add(second_doc)

        return first_doc

    @classmethod
    def match(cls, a: 'Document', b: 'Document', *, check_core_spines_only: Optional[bool] = False) -> bool:
        """
        Match two documents. Two documents match if they have the same spine structure.

        Args:
            a (Document): The first document.
            b (Document): The second document.
            check_core_spines_only (Optional[bool]): If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

        Returns: True if the documents match, False otherwise.

        Examples:

        """
        if check_core_spines_only:
            return [token.encoding for token in a.get_header_nodes() if token.encoding in CORE_HEADERS] \
                == [token.encoding for token in b.get_header_nodes() if token.encoding in CORE_HEADERS]
        else:
            return [token.encoding for token in a.get_header_nodes()] \
                == [token.encoding for token in b.get_header_nodes()]


    def to_transposed(self, interval: str, direction: str = Direction.UP.value) -> 'Document':
        """
        Create a new document with the transposed notes without modifying the original document.

        Args:
            interval (str): The name of the interval to transpose. It can be 'P4', 'P5', 'M2', etc. Check the \
             kp.AVAILABLE_INTERVALS for the available intervals.
            direction (str): The direction to transpose. It can be 'up' or 'down'.

        Returns ('Document'): A new Document with the transposed notes.
        """
        if interval not in AVAILABLE_INTERVALS:
            raise ValueError(
                f"Interval {interval!r} is not available. "
                f"Available intervals are: {AVAILABLE_INTERVALS}"
            )

        if direction not in (Direction.UP.value, Direction.DOWN.value):
            raise ValueError(
                f"Direction {direction!r} is not available. "
                f"Available directions are: "
                f"{Direction.UP.value!r}, {Direction.DOWN.value!r}"
            )

        new_document = self.clone()

        # BFS through the tree
        root = new_document.tree.root
        queue = Queue()
        queue.put(root)

        while not queue.empty():
            node = queue.get()

            if isinstance(node.token, NoteRestToken):
                orig_token = node.token

                new_subtokens = []
                transposed_pitch_encoding = None

                # Transpose each pitch subtoken in the pitch–duration list
                for subtoken in orig_token.pitch_duration_subtokens:
                    if subtoken.category == TokenCategory.PITCH:
                        # transpose() returns a new pitch subtoken
                        tp = transpose(
                            input_encoding=subtoken.encoding,
                            interval=IntervalsByName[interval],
                            direction=direction,
                            input_format=NotationEncoding.HUMDRUM.value,
                            output_format=NotationEncoding.HUMDRUM.value,
                        )
                        new_subtokens.append(Subtoken(tp, subtoken.category))
                        transposed_pitch_encoding = tp
                    else:
                        # leave duration subtokens untouched
                        new_subtokens.append(Subtoken(subtoken.encoding, subtoken.category))

                # Replace the node’s token with a new NoteRestToken
                node.token = NoteRestToken(
                    encoding=transposed_pitch_encoding,
                    pitch_duration_subtokens=new_subtokens,
                    decoration_subtokens=orig_token.decoration_subtokens,
                )

            # enqueue children
            for child in node.children:
                queue.put(child)

        # Return the transposed clone
        return new_document


    def __iter__(self):
        """
        Get the measure indexes used to export the whole document.

        Returns: An iterator with the indexes to export the document.
        """
        return iter(range(self.get_first_measure(), self.measures_count() + 1))

    def __next__(self):
        """
        Get the next index to export the document.

        Returns: The next index to export the document.
        """
        return next(iter(range(self.get_first_measure(), self.measures_count() + 1)))
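`__iter__` and `__next__` above make a `Document` iterable over its measure indexes, which is what drives the per-measure export loop shown in the `measures_count` docstring. A sketch with illustrative values (a 10-measure score whose first measure is 1):

```python
# A Document iterates from get_first_measure() through measures_count(),
# inclusive; the values below are illustrative stand-ins.
first_measure, last_measure = 1, 10

indexes = list(range(first_measure, last_measure + 1))
print(indexes)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```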

__init__(tree)

Constructor for Document class.

Parameters:

Name Type Description Default
tree MultistageTree

The tree structure of the document where all the nodes are stored.

required
Source code in kernpy/core/document.py
def __init__(self, tree: MultistageTree):
    """
    Constructor for Document class.

    Args:
        tree (MultistageTree): The tree structure of the document where all the nodes are stored.
    """
    self.tree = tree  # TODO: ? Should we use copy.deepcopy() here?
    self.measure_start_tree_stages = []
    self.page_bounding_boxes = {}
    self.header_stage = None

__iter__()

Get the measure indexes used to export the whole document.

Returns: An iterator with the indexes to export the document.

Source code in kernpy/core/document.py
def __iter__(self):
    """
    Get the measure indexes used to export the whole document.

    Returns: An iterator with the indexes to export the document.
    """
    return iter(range(self.get_first_measure(), self.measures_count() + 1))

__next__()

Get the next index to export the document.

Returns: The next index to export the document.

Source code in kernpy/core/document.py
def __next__(self):
    """
    Get the next index to export the document.

    Returns: The next index to export the document.
    """
    return next(iter(range(self.get_first_measure(), self.measures_count() + 1)))

add(other, *, check_core_spines_only=False)

Concatenate one document to the current document: Modify the current object!

Parameters:

Name Type Description Default
other 'Document'

The document to concatenate.

required
check_core_spines_only Optional[bool]

If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

False

Returns ('Document'): The current document (self) with the other document concatenated.

Source code in kernpy/core/document.py
def add(self, other: 'Document', *, check_core_spines_only: Optional[bool] = False) -> 'Document':
    """
    Concatenate one document to the current document: Modify the current object!

    Args:
        other: The document to concatenate.
        check_core_spines_only: If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

    Returns ('Document'): The current document (self) with the other document concatenated.
    """
    if not Document.match(self, other, check_core_spines_only=check_core_spines_only):
        raise Exception(f'Documents are not compatible for addition. '
                        f'Headers do not match with check_core_spines_only={check_core_spines_only}. '
                        f'self: {self.get_header_nodes()}, other: {other.get_header_nodes()}. ')

    current_header_nodes = self.get_header_stage()
    other_header_nodes = other.get_header_stage()

    current_leaf_nodes = self.get_leaves()
    flatten = lambda lst: [item for sublist in lst for item in sublist]
    other_first_level_children = [flatten(c.children) for c in other_header_nodes]  # avoid header stage

    for current_leaf, other_first_level_child in zip(current_leaf_nodes, other_first_level_children, strict=False):
        # Ignore extra spines from other document.
        # But if there are extra spines in the current document, it will raise an exception.
        if current_leaf.token.encoding == TERMINATOR:
            # remove the '*-' token from the current document
            current_leaf_index = current_leaf.parent.children.index(current_leaf)
            current_leaf.parent.children.pop(current_leaf_index)
            current_leaf.parent.children.insert(current_leaf_index, other_first_level_child)

        self.tree.add_node(
            stage=len(self.tree.stages) - 1,  # TODO: check offset 0, +1, -1 ????
            parent=current_leaf,
            token=other_first_level_child.token,
            last_spine_operator_node=other_first_level_child.last_spine_operator_node,
            previous_signature_nodes=other_first_level_child.last_signature_nodes,
            header_node=other_first_level_child.header_node
        )

    return self
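`add` refuses to concatenate unless `Document.match` accepts the two headers. That comparison can be sketched over plain header strings; the `CORE_HEADERS` value below is an assumption for illustration (the real constant is defined in `kernpy.core`):

```python
# Sketch of the Document.match() header comparison.
CORE_HEADERS = {'**kern', '**mens'}  # assumed value for illustration

def headers_match(a_headers, b_headers, check_core_spines_only=False):
    """Compare two header lists, optionally restricted to the core spines."""
    if check_core_spines_only:
        return ([h for h in a_headers if h in CORE_HEADERS]
                == [h for h in b_headers if h in CORE_HEADERS])
    return a_headers == b_headers

a = ['**kern', '**text']
b = ['**kern', '**dynam']
print(headers_match(a, b))                               # False: every spine compared
print(headers_match(a, b, check_core_spines_only=True))  # True: only **kern compared
```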

append_spines(spines)

Append the spines directly to the current document tree.

Parameters:

Name Type Description Default
spines list

A list of spines to append.

required

Returns: None

Examples:

>>> import kernpy as kp
>>> doc, _ = kp.read('score.krn')
>>> spines = [
>>>     '4e\t4f\t4g\t4a\n4b\t4c\t4d\t4e\n=\t=\t=\t=\n',
>>>     '4c\t4d\t4e\t4f\n4g\t4a\t4b\t4c\n=\t=\t=\t=\n',
>>> ]
>>> doc.append_spines(spines)

Source code in kernpy/core/document.py
def append_spines(self, spines) -> None:
    """
    Append the spines directly to the current document tree.

    Args:
        spines(list): A list of spines to append.

    Returns: None

    Examples:
        >>> import kernpy as kp
        >>> doc, _ = kp.read('score.krn')
        >>> spines = [
        >>> '4e\t4f\t4g\t4a\n4b\t4c\t4d\t4e\n=\t=\t=\t=\n',
        >>> '4c\t4d\t4e\t4f\n4g\t4a\t4b\t4c\n=\t=\t=\t=\n',
        >>> ]
        >>> doc.append_spines(spines)
        None
    """
    raise NotImplementedError()
    if len(spines) != self.get_spine_count():
        raise Exception(f"Spines count mismatch: {len(spines)} != {self.get_spine_count()}")

    for spine in spines:
        return

clone()

Create a deep copy of the Document instance.

Returns: A new instance of Document with the tree copied.

Source code in kernpy/core/document.py, lines 598-610
def clone(self):
    """
    Create a deep copy of the Document instance.

    Returns: A new instance of Document with the tree copied.

    """
    result = Document(copy(self.tree))
    result.measure_start_tree_stages = copy(self.measure_start_tree_stages)
    result.page_bounding_boxes = copy(self.page_bounding_boxes)
    result.header_stage = copy(self.header_stage)

    return result

frequencies(token_categories=None)

Frequency of tokens in the document.

Parameters:

Name Type Description Default
token_categories Optional[Sequence[TokenCategory]]

If None, all tokens are considered.

None

Returns (Dict): A dictionary with the category and the number of occurrences of each token.

Source code in kernpy/core/document.py, lines 701-723
def frequencies(self, token_categories: Optional[Sequence[TokenCategory]] = None) -> Dict:
    """
    Frequency of tokens in the document.


    Args:
        token_categories (Optional[Sequence[TokenCategory]]): If None, all tokens are considered.
    Returns (Dict):
        A dictionary with the category and the number of occurrences of each token.

    """
    tokens = self.get_all_tokens(filter_by_categories=token_categories)
    frequencies = {}
    for t in tokens:
        if t.encoding in frequencies:
            frequencies[t.encoding]['occurrences'] += 1
        else:
            frequencies[t.encoding] = {
                'occurrences': 1,
                'category': t.category.name,
            }

    return frequencies
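The counting logic above can be sketched standalone without kernpy; the `(encoding, category)` pairs below are hypothetical stand-ins for the token objects returned by `get_all_tokens`:

```python
from typing import Dict, Sequence, Tuple

def count_frequencies(tokens: Sequence[Tuple[str, str]]) -> Dict[str, dict]:
    """Accumulate occurrences per encoding, keeping the first-seen category."""
    frequencies: Dict[str, dict] = {}
    for encoding, category in tokens:
        if encoding in frequencies:
            frequencies[encoding]['occurrences'] += 1
        else:
            frequencies[encoding] = {'occurrences': 1, 'category': category}
    return frequencies
```

Note that a token's category is recorded only on first sight, exactly as in the method above.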

get_all_tokens(filter_by_categories=None)

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns:

Type Description
List[AbstractToken]

List[AbstractToken] - A list of all tokens.

Examples:

>>> tokens = document.get_all_tokens()
>>> Document.tokens_to_encodings(tokens)
>>> [type(t) for t in tokens]
[<class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>]
Source code in kernpy/core/document.py, lines 500-517
def get_all_tokens(self, filter_by_categories: Optional[Sequence[TokenCategory]] = None) -> List[AbstractToken]:
    """
    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

    Returns:
        List[AbstractToken] - A list of all tokens.

    Examples:
        >>> tokens = document.get_all_tokens()
        >>> Document.tokens_to_encodings(tokens)
        >>> [type(t) for t in tokens]
        [<class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>, <class 'kernpy.core.token.Token'>]
    """
    computed_categories = TokenCategory.valid(include=filter_by_categories)
    traversal = TokensTraversal(False, computed_categories)
    self.tree.dfs_iterative(traversal)
    return traversal.tokens

get_all_tokens_encodings(filter_by_categories=None)

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns:

Type Description
List[str]

list[str] - A list of all token encodings.

Examples:

>>> tokens = document.get_all_tokens_encodings()
>>> Document.tokens_to_encodings(tokens)
['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
Source code in kernpy/core/document.py, lines 519-537
def get_all_tokens_encodings(
        self,
        filter_by_categories: Optional[Sequence[TokenCategory]] = None
) -> List[str]:
    """
    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.


    Returns:
        list[str] - A list of all token encodings.

    Examples:
        >>> tokens = document.get_all_tokens_encodings()
        >>> Document.tokens_to_encodings(tokens)
        ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
    """
    tokens = self.get_all_tokens(filter_by_categories)
    return Document.tokens_to_encodings(tokens)

get_first_measure()

Get the index of the first measure of the document.

Returns: (Int) The index of the first measure of the document.

Raises: Exception - If the document has no measures.

Examples:

>>> import kernpy as kp
>>> document, err = kp.read('score.krn')
>>> document.get_first_measure()
1
Source code in kernpy/core/document.py, lines 399-416
def get_first_measure(self) -> int:
    """
    Get the index of the first measure of the document.

    Returns: (Int) The index of the first measure of the document.

    Raises: Exception - If the document has no measures.

    Examples:
        >>> import kernpy as kp
        >>> document, err = kp.read('score.krn')
        >>> document.get_first_measure()
        1
    """
    if len(self.measure_start_tree_stages) == 0:
        raise Exception('No measures found')

    return self.FIRST_MEASURE

get_header_nodes()

Get the header nodes of the current document.

Returns: List[HeaderToken]: A list with the header nodes of the current document.

Source code in kernpy/core/document.py, lines 680-686
def get_header_nodes(self) -> List[HeaderToken]:
    """
    Get the header nodes of the current document.

    Returns: List[HeaderToken]: A list with the header nodes of the current document.
    """
    return [token for token in self.get_all_tokens(filter_by_categories=None) if isinstance(token, HeaderToken)]

get_header_stage()

Get the Node list of the header stage.

Returns: (Union[List[Node], List[List[Node]]]) The Node list of the header stage.

Raises: Exception - If the document has no header stage.

Source code in kernpy/core/document.py, lines 370-381
def get_header_stage(self) -> Union[List[Node], List[List[Node]]]:
    """
    Get the Node list of the header stage.

    Returns: (Union[List[Node], List[List[Node]]]) The Node list of the header stage.

    Raises: Exception - If the document has no header stage.
    """
    if self.header_stage is not None:
        return self.tree.stages[self.header_stage]
    else:
        raise Exception('No header stage found')

get_leaves()

Get the leaves of the tree.

Returns: (List[Node]) The leaves of the tree.

Source code in kernpy/core/document.py, lines 383-389
def get_leaves(self) -> List[Node]:
    """
    Get the leaves of the tree.

    Returns: (List[Node]) The leaves of the tree.
    """
    return self.tree.stages[-1]

get_metacomments(KeyComment=None, clear=False)

Get all metacomments in the document

Parameters:

Name Type Description Default
KeyComment Optional[str]

Filter by a specific metacomment key: e.g. Use 'COM' to get only comments starting with '!!!COM: '. If None, all metacomments are returned.

None
clear bool

If True, the metacomment key is removed from the comment. E.g. '!!!COM: Coltrane' -> 'Coltrane'. If False, the metacomment key is kept. E.g. '!!!COM: Coltrane' -> '!!!COM: Coltrane'. The clear functionality is equivalent to the following code:

comment = '!!!COM: Coltrane'
clean_comment = comment.replace(f"!!!{KeyComment}: ", "")

Other formats are not supported.

False

Returns: A list of metacomments.

Examples:

>>> document.get_metacomments()
['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
>>> document.get_metacomments(KeyComment='COM')
['!!!COM: Coltrane']
>>> document.get_metacomments(KeyComment='COM', clear=True)
['Coltrane']
>>> document.get_metacomments(KeyComment='non_existing_key')
[]
Source code in kernpy/core/document.py, lines 438-476
def get_metacomments(self, KeyComment: Optional[str] = None, clear: bool = False) -> List[str]:
    """
    Get all metacomments in the document

    Args:
        KeyComment: Filter by a specific metacomment key: e.g. Use 'COM' to get only comments starting with\
            '!!!COM: '. If None, all metacomments are returned.
        clear: If True, the metacomment key is removed from the comment. E.g. '!!!COM: Coltrane' -> 'Coltrane'.\
            If False, the metacomment key is kept. E.g. '!!!COM: Coltrane' -> '!!!COM: Coltrane'. \
            The clear functionality is equivalent to the following code:
            ```python
            comment = '!!!COM: Coltrane'
            clean_comment = comment.replace(f"!!!{KeyComment}: ", "")
            ```
            Other formats are not supported.

    Returns: A list of metacomments.

    Examples:
        >>> document.get_metacomments()
        ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
        >>> document.get_metacomments(KeyComment='COM')
        ['!!!COM: Coltrane']
        >>> document.get_metacomments(KeyComment='COM', clear=True)
        ['Coltrane']
        >>> document.get_metacomments(KeyComment='non_existing_key')
        []
    """
    traversal = MetacommentsTraversal()
    self.tree.dfs_iterative(traversal)
    result = []
    for metacomment in traversal.metacomments:
        if KeyComment is None or metacomment.encoding.startswith(f"!!!{KeyComment}"):
            new_comment = metacomment.encoding
            if clear:
                new_comment = metacomment.encoding.replace(f"!!!{KeyComment}: ", "")
            result.append(new_comment)

    return result
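As the docstring notes, the `clear` behaviour reduces to a single `str.replace` on the metacomment prefix; a minimal standalone sketch:

```python
def clear_metacomment(comment: str, key: str) -> str:
    # Strip the '!!!KEY: ' prefix; any other format is left untouched.
    return comment.replace(f"!!!{key}: ", "")
```

Comments without the exact `'!!!KEY: '` prefix (including a missing space after the colon) pass through unchanged.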

get_spine_count()

Get the number of spines in the document.

Returns (int): The number of spines in the document.

Source code in kernpy/core/document.py, lines 391-397
def get_spine_count(self) -> int:
    """
    Get the number of spines in the document.

    Returns (int): The number of spines in the document.
    """
    return len(self.get_header_stage())  # TODO: test refactor

get_spine_ids()

Get the spine indexes of the current document.

Returns List[int]: A list with the indexes of the current document.

Examples:

>>> document.get_spine_ids()
[0, 1, 2, 3, 4]
Source code in kernpy/core/document.py, lines 688-699
def get_spine_ids(self) -> List[int]:
    """
            Get the indexes of the current document.

            Returns List[int]: A list with the indexes of the current document.

            Examples:
                >>> document.get_all_spine_indexes()
                [0, 1, 2, 3, 4]
            """
    header_nodes = self.get_header_nodes()
    return [node.spine_id for node in header_nodes]

get_unique_token_encodings(filter_by_categories=None)

Get unique token encodings.

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns: List[str] - A list of unique token encodings.

Source code in kernpy/core/document.py, lines 558-572
def get_unique_token_encodings(
        self,
        filter_by_categories: Optional[Sequence[TokenCategory]] = None
) -> List[str]:
    """
    Get unique token encodings.

    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

    Returns: List[str] - A list of unique token encodings.

    """
    tokens = self.get_unique_tokens(filter_by_categories)
    return Document.tokens_to_encodings(tokens)

get_unique_tokens(filter_by_categories=None)

Get unique tokens.

Parameters:

Name Type Description Default
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

None

Returns:

Type Description
List[AbstractToken]

List[AbstractToken] - A list of unique tokens.

Source code in kernpy/core/document.py, lines 539-556
def get_unique_tokens(
        self,
        filter_by_categories: Optional[Sequence[TokenCategory]] = None
) -> List[AbstractToken]:
    """
    Get unique tokens.

    Args:
        filter_by_categories (Optional[Sequence[TokenCategory]]): A list of categories to filter the tokens. If None, all tokens are returned.

    Returns:
        List[AbstractToken] - A list of unique tokens.

    """
    computed_categories = TokenCategory.valid(include=filter_by_categories)
    traversal = TokensTraversal(True, computed_categories)
    self.tree.dfs_iterative(traversal)
    return traversal.tokens

get_voices(clean=False)

Get the voices of the document.

Args: clean (bool): If True, remove the leading '!' from each voice name.

Returns: A list of voices.

Examples:

>>> document.get_voices()
['!sax', '!piano', '!bass']
>>> document.get_voices(clean=True)
['sax', 'piano', 'bass']
>>> document.get_voices(clean=False)
['!sax', '!piano', '!bass']
Source code in kernpy/core/document.py, lines 574-596
def get_voices(self, clean: bool = False):
    """
    Get the voices of the document.

    Args:
        clean (bool): If True, remove the leading '!' from each voice name.

    Returns: A list of voices.

    Examples:
        >>> document.get_voices()
        ['!sax', '!piano', '!bass']
        >>> document.get_voices(clean=True)
        ['sax', 'piano', 'bass']
        >>> document.get_voices(clean=False)
        ['!sax', '!piano', '!bass']
    """
    from kernpy.core import TokenCategory
    tokens = self.get_all_tokens(filter_by_categories=[TokenCategory.INSTRUMENTS])
    voices = [token.encoding for token in tokens]  # get_all_tokens returns tokens, not strings

    if clean:
        voices = [voice[1:] for voice in voices]
    return voices

match(a, b, *, check_core_spines_only=False) classmethod

Match two documents. Two documents match if they have the same spine structure.

Parameters:

Name Type Description Default
a Document

The first document.

required
b Document

The second document.

required
check_core_spines_only Optional[bool]

If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

False

Returns: True if the documents match, False otherwise.

Source code in kernpy/core/document.py, lines 786-806
@classmethod
def match(cls, a: 'Document', b: 'Document', *, check_core_spines_only: Optional[bool] = False) -> bool:
    """
    Match two documents. Two documents match if they have the same spine structure.

    Args:
        a (Document): The first document.
        b (Document): The second document.
        check_core_spines_only (Optional[bool]): If True, only the core spines (**kern and **mens) are checked. If False, all spines are checked.

    Returns: True if the documents match, False otherwise.

    Examples:

    """
    if check_core_spines_only:
        return [token.encoding for token in a.get_header_nodes() if token.encoding in CORE_HEADERS] \
            == [token.encoding for token in b.get_header_nodes() if token.encoding in CORE_HEADERS]
    else:
        return [token.encoding for token in a.get_header_nodes()] \
            == [token.encoding for token in b.get_header_nodes()]
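The comparison can be sketched standalone, with plain strings standing in for header tokens; the `CORE_HEADERS` set here is an assumption mirroring kernpy's constant of the same name:

```python
CORE_HEADERS = {'**kern', '**mens'}  # assumption: mirrors kernpy's CORE_HEADERS

def headers_match(a_headers, b_headers, *, check_core_spines_only=False):
    # Documents match when their (optionally filtered) header sequences are equal.
    if check_core_spines_only:
        a_headers = [h for h in a_headers if h in CORE_HEADERS]
        b_headers = [h for h in b_headers if h in CORE_HEADERS]
    return a_headers == b_headers
```

With `check_core_spines_only=True`, extra non-core spines such as `**dynam` are ignored in the comparison.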

measures_count()

Get the index of the last measure of the document.

Returns: (Int) The index of the last measure of the document.

Raises: Exception - If the document has no measures.

Examples:

>>> document, _ = kernpy.read('score.krn')
>>> document.measures_count()
10
>>> for i in range(document.get_first_measure(), document.measures_count() + 1):
>>>   options = kernpy.ExportOptions(from_measure=i, to_measure=i+4)
Source code in kernpy/core/document.py, lines 418-436
def measures_count(self) -> int:
    """
    Get the index of the last measure of the document.

    Returns: (Int) The index of the last measure of the document.

    Raises: Exception - If the document has no measures.

    Examples:
        >>> document, _ = kernpy.read('score.krn')
        >>> document.measures_count()
        10
        >>> for i in range(document.get_first_measure(), document.measures_count() + 1):
        >>>   options = kernpy.ExportOptions(from_measure=i, to_measure=i+4)
    """
    if len(self.measure_start_tree_stages) == 0:
        raise Exception('No measures found')

    return len(self.measure_start_tree_stages)

split()

Split the current document into a list of documents, one for each kern spine. Each resulting document will contain one kern spine along with all non-kern spines.

Returns:

Type Description
List['Document']

A list of documents, where each document contains one **kern spine and all non-kern spines from the original document.

Examples:

>>> document.split()
[<Document: score.krn>, <Document: score.krn>, <Document: score.krn>]
Source code in kernpy/core/document.py, lines 725-766
def split(self) -> List['Document']:
    """
    Split the current document into a list of documents, one for each **kern spine.
    Each resulting document will contain one **kern spine along with all non-kern spines.

    Returns:
        List['Document']: A list of documents, where each document contains one **kern spine
        and all non-kern spines from the original document.

    Examples:
        >>> document.split()
        [<Document: score.krn>, <Document: score.krn>, <Document: score.krn>]
    """
    raise NotImplementedError
    new_documents = []
    self_document_copy = deepcopy(self)
    kern_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding == '**kern']
    other_header_nodes = [node for node in self_document_copy.get_header_nodes() if node.encoding != '**kern']
    spine_ids = self_document_copy.get_spine_ids()

    for header_node in kern_header_nodes:
        if header_node.spine_id not in spine_ids:
            continue

        spine_ids.remove(header_node.spine_id)

        new_tree = deepcopy(self.tree)
        prev_node = new_tree.root
        while not isinstance(prev_node, HeaderToken):
            prev_node = prev_node.children[0]

        if not prev_node or not isinstance(prev_node, HeaderToken):
            raise Exception(f'Header node not found: {prev_node} in {header_node}')

        new_children = list(filter(lambda x: x.spine_id == header_node.spine_id, prev_node.children))
        new_tree.root = new_children

        new_document = Document(new_tree)

        new_documents.append(new_document)

    return new_documents

to_concat(first_doc, second_doc, deep_copy=True) classmethod

Concatenate two documents.

Parameters:

Name Type Description Default
first_doc Document

The first document.

required
second_doc Document

The second document.

required
deep_copy bool

If True, the documents are deep copied. If False, the documents are shallow copied.

True

Returns: A new instance of Document with the documents concatenated.

Source code in kernpy/core/document.py, lines 768-784
@classmethod
def to_concat(cls, first_doc: 'Document', second_doc: 'Document', deep_copy: bool = True) -> 'Document':
    """
    Concatenate two documents.

    Args:
        first_doc (Document): The first document.
        second_doc (Document): The second document.
        deep_copy (bool): If True, the documents are deep copied. If False, the documents are shallow copied.

    Returns: A new instance of Document with the documents concatenated.
    """
    first_doc = first_doc.clone() if deep_copy else first_doc
    second_doc = second_doc.clone() if deep_copy else second_doc
    first_doc.add(second_doc)

    return first_doc
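The clone-then-append pattern used here can be sketched with a minimal stand-in class (`Doc` below is hypothetical, not kernpy's `Document`):

```python
import copy

class Doc:
    """Hypothetical stand-in for Document: a clonable container with add()."""
    def __init__(self, items):
        self.items = items
    def clone(self):
        return copy.deepcopy(self)
    def add(self, other):
        self.items.extend(other.items)

def to_concat(first, second, deep_copy=True):
    # Optionally clone both documents so concatenation leaves the originals intact.
    first = first.clone() if deep_copy else first
    second = second.clone() if deep_copy else second
    first.add(second)
    return first
```

With `deep_copy=True` the inputs are untouched; with `deep_copy=False` the first document is mutated in place.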

to_transposed(interval, direction=Direction.UP.value)

Create a new document with the transposed notes without modifying the original document.

Parameters:

Name Type Description Default
interval str

The name of the interval to transpose. It can be 'P4', 'P5', 'M2', etc. Check the kp.AVAILABLE_INTERVALS for the available intervals.

required
direction str

The direction to transpose. It can be 'up' or 'down'.

UP.value

Returns: A new Document instance with the transposed notes.

Source code in kernpy/core/document.py, lines 809-879
def to_transposed(self, interval: str, direction: str = Direction.UP.value) -> 'Document':
    """
    Create a new document with the transposed notes without modifying the original document.

    Args:
        interval (str): The name of the interval to transpose. It can be 'P4', 'P5', 'M2', etc. Check the \
         kp.AVAILABLE_INTERVALS for the available intervals.
        direction (str): The direction to transpose. It can be 'up' or 'down'.

    Returns:
        Document: A new Document instance with the transposed notes.
    """
    if interval not in AVAILABLE_INTERVALS:
        raise ValueError(
            f"Interval {interval!r} is not available. "
            f"Available intervals are: {AVAILABLE_INTERVALS}"
        )

    if direction not in (Direction.UP.value, Direction.DOWN.value):
        raise ValueError(
            f"Direction {direction!r} is not available. "
            f"Available directions are: "
            f"{Direction.UP.value!r}, {Direction.DOWN.value!r}"
        )

    new_document = self.clone()

    # BFS through the tree
    root = new_document.tree.root
    queue = Queue()
    queue.put(root)

    while not queue.empty():
        node = queue.get()

        if isinstance(node.token, NoteRestToken):
            orig_token = node.token

            new_subtokens = []
            transposed_pitch_encoding = None

            # Transpose each pitch subtoken in the pitch–duration list
            for subtoken in orig_token.pitch_duration_subtokens:
                if subtoken.category == TokenCategory.PITCH:
                    # transpose() returns a new pitch subtoken
                    tp = transpose(
                        input_encoding=subtoken.encoding,
                        interval=IntervalsByName[interval],
                        direction=direction,
                        input_format=NotationEncoding.HUMDRUM.value,
                        output_format=NotationEncoding.HUMDRUM.value,
                    )
                    new_subtokens.append(Subtoken(tp, subtoken.category))
                    transposed_pitch_encoding = tp
                else:
                    # leave duration subtokens untouched
                    new_subtokens.append(Subtoken(subtoken.encoding, subtoken.category))

            # Replace the node’s token with a new NoteRestToken
            node.token = NoteRestToken(
                encoding=transposed_pitch_encoding,
                pitch_duration_subtokens=new_subtokens,
                decoration_subtokens=orig_token.decoration_subtokens,
            )

        # enqueue children
        for child in node.children:
            queue.put(child)

    # Return the transposed clone
    return new_document

tokens_to_encodings(tokens) classmethod

Get the encodings of a list of tokens.

The method is equivalent to the following code

>>> tokens = kp.get_all_tokens()
>>> [token.encoding for token in tokens if token.encoding is not None]

Parameters:

Name Type Description Default
tokens Sequence[AbstractToken]

list - A list of tokens.

required

Returns: List[str] - A list of token encodings.

Examples:

>>> tokens = document.get_all_tokens()
>>> Document.tokens_to_encodings(tokens)
['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
Source code in kernpy/core/document.py, lines 478-498
@classmethod
def tokens_to_encodings(cls, tokens: Sequence[AbstractToken]):
    """
    Get the encodings of a list of tokens.

    The method is equivalent to the following code:
        >>> tokens = kp.get_all_tokens()
        >>> [token.encoding for token in tokens if token.encoding is not None]

    Args:
        tokens (Sequence[AbstractToken]): list - A list of tokens.

    Returns: List[str] - A list of token encodings.

    Examples:
        >>> tokens = document.get_all_tokens()
        >>> Document.tokens_to_encodings(tokens)
        ['!!!COM: Coltrane', '!!!voices: 1', '!!!OPR: Blue Train']
    """
    encodings = [token.encoding for token in tokens if token.encoding is not None]
    return encodings
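The documented equivalence can be shown with a tiny stand-in token class (`FakeToken` below is hypothetical, standing in for `AbstractToken`):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FakeToken:
    """Hypothetical stand-in for AbstractToken."""
    encoding: Optional[str]

def tokens_to_encodings(tokens: List[FakeToken]) -> List[str]:
    # Keep only tokens that actually carry an encoding.
    return [t.encoding for t in tokens if t.encoding is not None]
```

Tokens whose `encoding` is `None` are silently dropped, so the result may be shorter than the input list.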

Duration

Bases: ABC

Represents the duration of a note or a rest.

The duration is represented using the Humdrum Kern format. The duration is a number that represents the number of units of the duration.

The duration of a whole note is 1, half note is 2, quarter note is 4, eighth note is 8, etc.

The duration of a note is represented by a number. The duration of a rest is also represented by a number.

This class does not limit the duration range.

In the following example, the duration is represented by the number '2'.

**kern
*clefG2
2c          // half note
4c          // quarter note
8c          // eighth note
16c         // sixteenth note
*-
Source code in kernpy/core/tokens.py, lines 970-1032
class Duration(ABC):
    """
    Represents the duration of a note or a rest.

    The duration is represented using the Humdrum Kern format.
    The duration is a number that represents the number of units of the duration.

    The duration of a whole note is 1, half note is 2, quarter note is 4, eighth note is 8, etc.

    The duration of a note is represented by a number. The duration of a rest is also represented by a number.

    This class does not limit the duration range.

    In the following example, the duration is represented by the number '2'.
    ```
    **kern
    *clefG2
    2c          // half note
    4c          // quarter note
    8c          // eighth note
    16c         // sixteenth note
    *-
    ```
    """

    def __init__(self, raw_duration):
        self.encoding = str(raw_duration)

    @abstractmethod
    def modify(self, ratio: int):
        pass

    @abstractmethod
    def __deepcopy__(self, memo=None):
        pass

    @abstractmethod
    def __eq__(self, other):
        pass

    @abstractmethod
    def __ne__(self, other):
        pass

    @abstractmethod
    def __gt__(self, other):
        pass

    @abstractmethod
    def __lt__(self, other):
        pass

    @abstractmethod
    def __ge__(self, other):
        pass

    @abstractmethod
    def __le__(self, other):
        pass

    @abstractmethod
    def __str__(self):
        pass

DurationClassical

Bases: Duration

Represents the duration in classical notation of a note or a rest.

Source code in kernpy/core/tokens.py, lines 1078-1320
class DurationClassical(Duration):
    """
    Represents the duration in classical notation of a note or a rest.
    """

    def __init__(self, duration: int):
        """
        Create a new Duration object.

        Args:
            duration (int): duration representation in Humdrum Kern format

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration.duration
            2
            >>> duration = DurationClassical(1)
            >>> duration.duration
            1
            >>> duration = DurationClassical(0)
            Traceback (most recent call last):
            ...
            ValueError: Bad duration: 0 was provided.
            >>> duration = DurationClassical(3)
            Traceback (most recent call last):
            ...
            ValueError: Bad duration: 3 was provided.
        """
        super().__init__(duration)
        if not DurationClassical.__is_valid_duration(duration):
            raise ValueError(f'Bad duration: {duration} was provided.')

        self.duration = int(duration)

    def modify(self, ratio: int):
        """
        Modify the duration of a note or a rest of the current object.

        Args:
            ratio (int): The factor to modify the duration. The factor must be greater than 0.

        Returns (DurationClassical): The new duration object with the modified duration.

        Examples:
            >>> duration = DurationClassical(2)
            >>> new_duration = duration.modify(2)
            >>> new_duration.duration
            4
            >>> duration = DurationClassical(2)
            >>> new_duration = duration.modify(0)
            Traceback (most recent call last):
            ...
            ValueError: Invalid factor provided: 0. The factor must be greater than 0.
            >>> duration = DurationClassical(2)
            >>> new_duration = duration.modify(-2)
            Traceback (most recent call last):
            ...
            ValueError: Invalid factor provided: -2. The factor must be greater than 0.
        """
        if not isinstance(ratio, int):
            raise ValueError(f'Invalid factor provided: {ratio}. The factor must be an integer.')
        if ratio <= 0:
            raise ValueError(f'Invalid factor provided: {ratio}. The factor must be greater than 0.')

        return DurationClassical(self.duration * ratio)

    def __deepcopy__(self, memo=None):
        if memo is None:
            memo = {}

        new_instance = DurationClassical(self.duration)
        new_instance.duration = self.duration
        return new_instance

    def __str__(self):
        return f'{self.duration}'

    def __eq__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool): True if the durations are equal, False otherwise


        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(2)
            >>> duration == duration2
            True
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration == duration2
            False
        """
        if not isinstance(other, DurationClassical):
            return False
        return self.duration == other.duration

    def __ne__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool):
            True if the durations are different, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(2)
            >>> duration != duration2
            False
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration != duration2
            True
        """
        return not self.__eq__(other)

    def __gt__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other: The other duration to compare

        Returns (bool):
            True if this duration is greater than the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration > duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration > duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration > duration2
            False
        """
        if not isinstance(other, DurationClassical):
            raise ValueError(f'Invalid comparison: > operator can not be used to compare duration with {type(other)}')
        return self.duration > other.duration

    def __lt__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool):
            True if this duration is less than the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration < duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration < duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration < duration2
            False
        """
        if not isinstance(other, DurationClassical):
            raise ValueError(f'Invalid comparison: < operator can not be used to compare duration with {type(other)}')
        return self.duration < other.duration

    def __ge__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns (bool):
            True if this duration is greater than or equal to the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration >= duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration >= duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration >= duration2
            True
        """
        return self.__gt__(other) or self.__eq__(other)

    def __le__(self, other: 'DurationClassical') -> bool:
        """
        Compare two durations.

        Args:
            other (DurationClassical): The other duration to compare

        Returns:
            True if this duration is less than or equal to the other, False otherwise

        Examples:
            >>> duration = DurationClassical(2)
            >>> duration2 = DurationClassical(4)
            >>> duration <= duration2
            True
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(2)
            >>> duration <= duration2
            False
            >>> duration = DurationClassical(4)
            >>> duration2 = DurationClassical(4)
            >>> duration <= duration2
            True
        """
        return self.__lt__(other) or self.__eq__(other)

    @classmethod
    def __is_valid_duration(cls, duration: int) -> bool:
        try:
            duration = int(duration)
        except (TypeError, ValueError):
            return False

        # Valid classical durations are 1 (whole-note figure) or any positive even figure
        return duration > 0 and (duration % 2 == 0 or duration == 1)
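The private validator above accepts the figure 1 (a whole note) and any positive even figure. A standalone sketch of that rule, with `is_valid_duration` as an illustrative name rather than part of the kernpy API:

```python
def is_valid_duration(duration) -> bool:
    # Mirrors DurationClassical's private validator:
    # valid figures are 1 or any positive even integer.
    try:
        duration = int(duration)
    except (TypeError, ValueError):
        return False
    return duration > 0 and (duration % 2 == 0 or duration == 1)
```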

__eq__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if the durations are equal, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(2)
>>> duration == duration2
True
>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration == duration2
False
Source code in kernpy/core/tokens.py
def __eq__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool): True if the durations are equal, False otherwise


    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(2)
        >>> duration == duration2
        True
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration == duration2
        False
    """
    if not isinstance(other, DurationClassical):
        return False
    return self.duration == other.duration

__ge__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if this duration is greater than or equal to the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration >= duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration >= duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration >= duration2
True
Source code in kernpy/core/tokens.py
def __ge__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool):
        True if this duration is greater than or equal to the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration >= duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration >= duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration >= duration2
        True
    """
    return self.__gt__(other) or self.__eq__(other)

__gt__(other)

Compare two durations.

Parameters:

Name Type Description Default
other 'DurationClassical'

The other duration to compare

required

Returns (bool): True if this duration is greater than the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration > duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration > duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration > duration2
False
Source code in kernpy/core/tokens.py
def __gt__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other: The other duration to compare

    Returns (bool):
        True if this duration is greater than the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration > duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration > duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration > duration2
        False
    """
    if not isinstance(other, DurationClassical):
        raise ValueError(f'Invalid comparison: > operator can not be used to compare duration with {type(other)}')
    return self.duration > other.duration

__init__(duration)

Create a new Duration object.

Parameters:

Name Type Description Default
duration str

duration representation in Humdrum Kern format

required

Examples:

>>> DurationClassical(2).duration
2
>>> DurationClassical(32).duration
32
>>> DurationClassical(0)
Traceback (most recent call last):
...
ValueError: Bad duration: 0 was provided.
>>> DurationClassical(3)
Traceback (most recent call last):
...
ValueError: Bad duration: 3 was provided.
Source code in kernpy/core/tokens.py
def __init__(self, duration: int):
    """
    Create a new Duration object.

    Args:
        duration (str): duration representation in Humdrum Kern format

    Examples:
        >>> DurationClassical(2).duration
        2
        >>> DurationClassical(32).duration
        32
        >>> DurationClassical(0)
        Traceback (most recent call last):
        ...
        ValueError: Bad duration: 0 was provided.
        >>> DurationClassical(3)
        Traceback (most recent call last):
        ...
        ValueError: Bad duration: 3 was provided.
    """
    super().__init__(duration)
    if not DurationClassical.__is_valid_duration(duration):
        raise ValueError(f'Bad duration: {duration} was provided.')

    self.duration = int(duration)

__le__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns:

Type Description
bool

True if this duration is less than or equal to the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration <= duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration <= duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration <= duration2
True
Source code in kernpy/core/tokens.py
def __le__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns:
        True if this duration is less than or equal to the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration <= duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration <= duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration <= duration2
        True
    """
    return self.__lt__(other) or self.__eq__(other)
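kernpy spells out each rich comparison explicitly; the same ordering behaviour can be derived from just `__eq__` and `__lt__` with `functools.total_ordering`. A minimal sketch with a hypothetical `Dur` stand-in (not a kernpy class):

```python
from functools import total_ordering

@total_ordering
class Dur:
    """Hypothetical stand-in: total_ordering derives <=, >, >= from __eq__ and __lt__."""
    def __init__(self, d):
        self.d = d

    def __eq__(self, other):
        return isinstance(other, Dur) and self.d == other.d

    def __lt__(self, other):
        if not isinstance(other, Dur):
            return NotImplemented
        return self.d < other.d
```

Hand-writing each dunder, as kernpy does, keeps the custom error messages for invalid comparisons; `total_ordering` trades that for brevity.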

__lt__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if this duration is less than the other, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration < duration2
True
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(2)
>>> duration < duration2
False
>>> duration = DurationClassical(4)
>>> duration2 = DurationClassical(4)
>>> duration < duration2
False
Source code in kernpy/core/tokens.py
def __lt__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool):
        True if this duration is less than the other, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration < duration2
        True
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(2)
        >>> duration < duration2
        False
        >>> duration = DurationClassical(4)
        >>> duration2 = DurationClassical(4)
        >>> duration < duration2
        False
    """
    if not isinstance(other, DurationClassical):
        raise ValueError(f'Invalid comparison: < operator can not be used to compare duration with {type(other)}')
    return self.duration < other.duration

__ne__(other)

Compare two durations.

Parameters:

Name Type Description Default
other DurationClassical

The other duration to compare

required

Returns (bool): True if the durations are different, False otherwise

Examples:

>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(2)
>>> duration != duration2
False
>>> duration = DurationClassical(2)
>>> duration2 = DurationClassical(4)
>>> duration != duration2
True
Source code in kernpy/core/tokens.py
def __ne__(self, other: 'DurationClassical') -> bool:
    """
    Compare two durations.

    Args:
        other (DurationClassical): The other duration to compare

    Returns (bool):
        True if the durations are different, False otherwise

    Examples:
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(2)
        >>> duration != duration2
        False
        >>> duration = DurationClassical(2)
        >>> duration2 = DurationClassical(4)
        >>> duration != duration2
        True
    """
    return not self.__eq__(other)

modify(ratio)

Create a new DurationClassical with the duration of the current note or rest scaled by the given factor.

Parameters:

Name Type Description Default
ratio int

The factor to modify the duration. The factor must be greater than 0.

required

Returns (DurationClassical): The new duration object with the modified duration.

Examples:

>>> duration = DurationClassical(2)
>>> new_duration = duration.modify(2)
>>> new_duration.duration
4
>>> duration = DurationClassical(2)
>>> new_duration = duration.modify(0)
Traceback (most recent call last):
...
ValueError: Invalid factor provided: 0. The factor must be greater than 0.
>>> duration = DurationClassical(2)
>>> new_duration = duration.modify(-2)
Traceback (most recent call last):
...
ValueError: Invalid factor provided: -2. The factor must be greater than 0.
Source code in kernpy/core/tokens.py
def modify(self, ratio: int):
    """
    Create a new DurationClassical with the duration of the current note or rest scaled by the given factor.

    Args:
        ratio (int): The factor to modify the duration. The factor must be greater than 0.

    Returns (DurationClassical): The new duration object with the modified duration.

    Examples:
        >>> duration = DurationClassical(2)
        >>> new_duration = duration.modify(2)
        >>> new_duration.duration
        4
        >>> duration = DurationClassical(2)
        >>> new_duration = duration.modify(0)
        Traceback (most recent call last):
        ...
        ValueError: Invalid factor provided: 0. The factor must be greater than 0.
        >>> duration = DurationClassical(2)
        >>> new_duration = duration.modify(-2)
        Traceback (most recent call last):
        ...
        ValueError: Invalid factor provided: -2. The factor must be greater than 0.
    """
    if not isinstance(ratio, int):
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be an integer.')
    if ratio <= 0:
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be greater than 0.')

    return DurationClassical(self.duration * ratio)
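The `modify` contract above (integer factor, strictly positive) can be sketched as a standalone function; `scale_duration` is an illustrative name, not part of kernpy:

```python
def scale_duration(duration: int, ratio: int) -> int:
    # Same validation as DurationClassical.modify: the ratio must be an integer > 0.
    if not isinstance(ratio, int):
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be an integer.')
    if ratio <= 0:
        raise ValueError(f'Invalid factor provided: {ratio}. The factor must be greater than 0.')
    return duration * ratio
```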

DurationMensural

Bases: Duration

Represents the duration in mensural notation of a note or a rest.

Source code in kernpy/core/tokens.py
class DurationMensural(Duration):
    """
    Represents the duration in mensural notation of a note or a rest.
    """

    def __init__(self, duration):
        super().__init__(duration)
        self.duration = duration

    def __eq__(self, other):
        raise NotImplementedError()

    def modify(self, ratio: int):
        raise NotImplementedError()

    def __deepcopy__(self, memo=None):
        raise NotImplementedError()

    def __gt__(self, other):
        raise NotImplementedError()

    def __lt__(self, other):
        raise NotImplementedError()

    def __le__(self, other):
        raise NotImplementedError()

    def __str__(self):
        raise NotImplementedError()

    def __ge__(self, other):
        raise NotImplementedError()

    def __ne__(self, other):
        raise NotImplementedError()

DynSpineImporter

Bases: SpineImporter

Source code in kernpy/core/dyn_importer.py
class DynSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        KernSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        # TODO: Find out the differences between **dyn and **dynam and change this class. Using the same importer for both for now.
        dynam_importer = DynamSpineImporter()
        return dynam_importer.import_token(encoding)

__init__(verbose=False)

KernSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/dyn_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    KernSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

DynamSpineImporter

Bases: SpineImporter

Source code in kernpy/core/dynam_spine_importer.py
class DynamSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        KernSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()  # TODO: Create a custom functional listener for DynamSpineImporter

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception:
            return SimpleToken(encoding, TokenCategory.DYNAMICS)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.DYNAMICS)

        return token
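The import logic above follows a fallback pattern: try the stricter **kern parser first, and when parsing fails (or the resulting category is not a structural one) downgrade to a plain dynamics token. A standalone sketch of that pattern with hypothetical names and a toy parser:

```python
def import_dynam_token(encoding, parse_kern, accepted_categories):
    # Try the strict parser; on failure, fall back to a plain dynamics token.
    try:
        category, token = parse_kern(encoding)
    except Exception:
        return ('DYNAMICS', encoding)
    # Non-structural categories are also treated as dynamics content.
    if category not in accepted_categories:
        return ('DYNAMICS', encoding)
    return (category, token)

def toy_parser(enc):
    # A toy stand-in for KernSpineImporter that only understands barlines.
    if enc.startswith('='):
        return ('BARLINES', enc)
    raise ValueError(enc)
```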

__init__(verbose=False)

KernSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/dynam_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    KernSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

EkernTokenizer

Bases: Tokenizer

EkernTokenizer converts a Token into an eKern (Extended **kern) string representation. This format uses a '@' separator for the main tokens and a '·' separator for the decoration tokens.

Source code in kernpy/core/tokenizers.py
class EkernTokenizer(Tokenizer):
    """
    EkernTokenizer converts a Token into an eKern (Extended **kern) string representation. This format uses a '@' separator for the \
    main tokens and a '·' separator for the decoration tokens.
    """

    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new EkernTokenizer

        Args:
            token_categories (List[TokenCategory]): List of categories to be tokenized. If None, an exception will be raised.
        """
        super().__init__(token_categories=token_categories)

    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into an eKern string representation.
        Args:
            token (Token): Token to be tokenized.

        Returns (str): eKern string representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> EkernTokenizer(token_categories=BEKERN_CATEGORIES).tokenize(token)
            '2@.@bb@-·_·L'

        """
        return token.export(filter_categories=lambda cat: cat in self.token_categories)

__init__(*, token_categories)

Create a new EkernTokenizer

Parameters:

Name Type Description Default
token_categories List[TokenCategory]

List of categories to be tokenized. If None, an exception will be raised.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new EkernTokenizer

    Args:
        token_categories (List[TokenCategory]): List of categories to be tokenized. If None, an exception will be raised.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into an eKern string representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): eKern string representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> EkernTokenizer(token_categories=BEKERN_CATEGORIES).tokenize(token)
'2@.@bb@-·_·L'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into an eKern string representation.
    Args:
        token (Token): Token to be tokenized.

    Returns (str): eKern string representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> EkernTokenizer(token_categories=BEKERN_CATEGORIES).tokenize(token)
        '2@.@bb@-·_·L'

    """
    return token.export(filter_categories=lambda cat: cat in self.token_categories)
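Given the separators described above ('@' between main tokens, '·' before decoration tokens), an eKern string can be split back into its parts with plain string operations. A hedged sketch, using the example encoding from the docstring:

```python
# Split an eKern token: decorations are '·'-separated, main parts '@'-separated.
ekern = '2@.@bb@-·_·L'
head, *decorations = ekern.split('·')
main = head.split('@')
# main       -> ['2', '.', 'bb', '-']
# decorations -> ['_', 'L']
```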

Encoding

Bases: Enum

Options for exporting a kern file.

Example:

>>> import kernpy as kp
>>> # Load a file
>>> doc, _ = kp.load('path/to/file.krn')
>>> # Save the file using the specified encoding
>>> exported_content = kp.dumps(encoding=kp.Encoding.normalizedKern)
Source code in kernpy/core/tokenizers.py
class Encoding(Enum):  # TODO: Eventually, polymorphism will be used to export different types of kern files
    """
    Options for exporting a kern file.

    Example:
        >>> import kernpy as kp
        >>> # Load a file
        >>> doc, _ = kp.load('path/to/file.krn')
        >>>
        >>> # Save the file using the specified encoding
        >>> exported_content = kp.dumps(encoding=kp.Encoding.normalizedKern)
    """
    eKern = 'ekern'
    normalizedKern = 'kern'
    bKern = 'bkern'
    bEkern = 'bekern'

    def prefix(self) -> str:
        """
        Get the prefix of the kern type.

        Returns (str): Prefix of the kern type.
        """
        if self == Encoding.eKern:
            return 'e'
        elif self == Encoding.normalizedKern:
            return ''
        elif self == Encoding.bKern:
            return 'b'
        elif self == Encoding.bEkern:
            return 'be'
        else:
            raise ValueError(f'Unknown kern type: {self}. '
                             f'Supported types are: '
                             f"{'-'.join([kern_type.name for kern_type in Encoding.__members__.values()])}")

prefix()

Get the prefix of the kern type.

Returns (str): Prefix of the kern type.

Source code in kernpy/core/tokenizers.py
def prefix(self) -> str:
    """
    Get the prefix of the kern type.

    Returns (str): Prefix of the kern type.
    """
    if self == Encoding.eKern:
        return 'e'
    elif self == Encoding.normalizedKern:
        return ''
    elif self == Encoding.bKern:
        return 'b'
    elif self == Encoding.bEkern:
        return 'be'
    else:
        raise ValueError(f'Unknown kern type: {self}. '
                         f'Supported types are: '
                         f"{'-'.join([kern_type.name for kern_type in Encoding.__members__.values()])}")
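The `prefix()` chain above is a fixed mapping from encoding to prefix, so it can also be written as a table lookup. A standalone mirror (the name `EncodingMirror` is illustrative; the real class is `kernpy.Encoding`):

```python
from enum import Enum

class EncodingMirror(Enum):
    """Standalone mirror of kernpy's Encoding, with prefix() as a table lookup."""
    eKern = 'ekern'
    normalizedKern = 'kern'
    bKern = 'bkern'
    bEkern = 'bekern'

    def prefix(self) -> str:
        # Same mapping as the if/elif chain in Encoding.prefix()
        return {'ekern': 'e', 'kern': '', 'bkern': 'b', 'bekern': 'be'}[self.value]
```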

ErrorListener

Bases: ConsoleErrorListener

Source code in kernpy/core/error_listener.py
class ErrorListener(ConsoleErrorListener):
    def __init__(self, *, verbose: Optional[bool] = False):
        """
        ErrorListener constructor.
        Args:
            verbose (bool): If True, the error messages will be printed to the console using \
            the `ConsoleErrorListener` interface.
        """
        super().__init__()
        self.errors = []
        self.verbose = verbose

    def syntaxError(self, recognizer, offendingSymbol, line, charPositionInLine, msg, e):
        if self.verbose:
            # Delegate to ConsoleErrorListener so the message is printed to the console
            super().syntaxError(recognizer, offendingSymbol, line, charPositionInLine, msg, e)

        self.errors.append(ParseError(offendingSymbol, charPositionInLine, msg, e))

    def getNumberErrorsFound(self):
        return len(self.errors)

    def __str__(self):
        sb = ""
        for error in self.errors:
            sb += str(error) + "\n"
        return sb

__init__(*, verbose=False)

ErrorListener constructor.

Parameters:

Name Type Description Default
verbose bool

If True, the error messages will be printed to the console using the ConsoleErrorListener interface.

False

Source code in kernpy/core/error_listener.py
def __init__(self, *, verbose: Optional[bool] = False):
    """
    ErrorListener constructor.
    Args:
        verbose (bool): If True, the error messages will be printed to the console using \
        the `ConsoleErrorListener` interface.
    """
    super().__init__()
    self.errors = []
    self.verbose = verbose
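The listener above accumulates every syntax error in a list so they can be counted and printed after parsing. A minimal stand-in for that pattern, with no ANTLR dependency (class and method names here are hypothetical):

```python
class CollectingErrorListener:
    """Collects syntax errors instead of aborting, mirroring ErrorListener's pattern."""

    def __init__(self, verbose=False):
        self.errors = []
        self.verbose = verbose

    def syntax_error(self, line, column, msg):
        if self.verbose:
            print(f'line {line}:{column} {msg}')  # console output when verbose
        self.errors.append((line, column, msg))

    def __str__(self):
        return '\n'.join(f'line {l}:{c} {m}' for l, c, m in self.errors)

listener = CollectingErrorListener()
listener.syntax_error(3, 7, 'mismatched input')
```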

ErrorToken

Bases: SimpleToken

Used to wrap tokens that have not been parsed.

Source code in kernpy/core/tokens.py
class ErrorToken(SimpleToken):
    """
    Used to wrap tokens that have not been parsed.
    """

    def __init__(
            self,
            encoding: str,
            line: int,
            error: str
    ):
        """
        ErrorToken constructor

        Args:
            encoding (str): The original representation of the token.
            line (int): The line number of the token in the score.
            error (str): The error message thrown by the parser.
        """
        super().__init__(encoding, TokenCategory.ERROR)
        self.error = error
        self.line = line

    def export(self, **kwargs) -> str:
        """
        Exports the error token.

        Returns (str): A string representation of the error token.
        """
        # return ERROR_TOKEN
        return self.encoding  # TODO: add a constant for the error token

    def __str__(self):
        """
        Information about the error token.

        Returns (str): The information about the error token.
        """
        return f'Error token found at line {self.line} with encoding "{self.encoding}". Description: {self.error}'

__init__(encoding, line, error)

ErrorToken constructor

Parameters:

Name Type Description Default
encoding str

The original representation of the token.

required
line int

The line number of the token in the score.

required
error str

The error message thrown by the parser.

required
Source code in kernpy/core/tokens.py
def __init__(
        self,
        encoding: str,
        line: int,
        error: str
):
    """
    ErrorToken constructor

    Args:
        encoding (str): The original representation of the token.
        line (int): The line number of the token in the score.
        error (str): The error message thrown by the parser.
    """
    super().__init__(encoding, TokenCategory.ERROR)
    self.error = error
    self.line = line

__str__()

Information about the error token.

Returns (str): The information about the error token.

Source code in kernpy/core/tokens.py
def __str__(self):
    """
    Information about the error token.

    Returns (str): The information about the error token.
    """
    return f'Error token found at line {self.line} with encoding "{self.encoding}". Description: {self.error}'

export(**kwargs)

Exports the error token.

Returns (str): A string representation of the error token.

Source code in kernpy/core/tokens.py
def export(self, **kwargs) -> str:
    """
    Exports the error token.

    Returns (str): A string representation of the error token.
    """
    # return ERROR_TOKEN
    return self.encoding  # TODO: add a constant for the error token
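A quick sketch of the message `__str__` produces, using hypothetical values for the encoding, line, and error description:

```python
# Hypothetical values; the format string mirrors ErrorToken.__str__ above.
encoding, line, error = '2qq#', 42, "extraneous input '#'"
message = f'Error token found at line {line} with encoding "{encoding}". Description: {error}'
```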

ExportOptions

ExportOptions class.

Store the options to export a **kern file.

Source code in kernpy/core/exporter.py
class ExportOptions:
    """
    `ExportOptions` class.

    Store the options to export a **kern file.
    """

    def __init__(
            self,
            spine_types: [] = None,
            token_categories: [] = None,
            from_measure: int = None,
            to_measure: int = None,
            kern_type: Encoding = Encoding.normalizedKern,
            instruments: [] = None,
            show_measure_numbers: bool = False,
            spine_ids: [int] = None
    ):
        """
        Create a new ExportOptions object.

        Args:
            spine_types (Iterable): **kern, **mens, etc...
            token_categories (Iterable): TokenCategory
            from_measure (int): The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1
            to_measure (int): The measure to end exporting. When None, the exporter will end at the end of the file.
            kern_type (Encoding): The type of the kern file to export.
            instruments (Iterable): The instruments to export. When None, all the instruments will be exported.
            show_measure_numbers (Bool): Show the measure numbers in the exported file.
            spine_ids (Iterable): The ids of the spines to export. When None, all the spines will be exported. Spine ids start at 0 and increase by 1.

        Example:
            >>> import kernpy

            Create the importer and read the file
            >>> hi = Importer()
            >>> document = hi.import_file('file.krn')
            >>> exporter = Exporter()

            Export the file with the specified options
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> exported_data = exporter.export_string(document, options)

            Export only the lyrics
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LYRICS])
            >>> exported_data = exporter.export_string(document, options)

            Export the comments
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LINE_COMMENTS, TokenCategory.FIELD_COMMENTS])
            >>> exported_data = exporter.export_string(document, options)

            Export using the eKern version
            >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES, kern_type=Encoding.eKern)
            >>> exported_data = exporter.export_string(document, options)

        """
        self.spine_types = spine_types if spine_types is not None else deepcopy(HEADERS)
        self.from_measure = from_measure
        self.to_measure = to_measure
        self.token_categories = token_categories if token_categories is not None else [c for c in TokenCategory]
        self.kern_type = kern_type
        self.instruments = instruments
        self.show_measure_numbers = show_measure_numbers
        self.spine_ids = spine_ids  # When exporting, if spine_ids=None all the spines will be exported.

    def __eq__(self, other: 'ExportOptions') -> bool:
        """
        Compare two ExportOptions objects.

        Args:
            other: The other ExportOptions object to compare.

        Returns (bool):
            True if the objects are equal, False otherwise.

        Examples:
            >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options1 == options2
            True

            >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
            >>> options1 == options3
            False
        """
        return self.spine_types == other.spine_types and \
            self.token_categories == other.token_categories and \
            self.from_measure == other.from_measure and \
            self.to_measure == other.to_measure and \
            self.kern_type == other.kern_type and \
            self.instruments == other.instruments and \
            self.show_measure_numbers == other.show_measure_numbers and \
            self.spine_ids == other.spine_ids

    def __ne__(self, other: 'ExportOptions') -> bool:
        """
        Compare two ExportOptions objects.

        Args:
            other (ExportOptions): The other ExportOptions object to compare.

        Returns (bool):
            True if the objects are not equal, False otherwise.

        Examples:
            >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
            >>> options1 != options2
            False

            >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
            >>> options1 != options3
            True
        """
        return not self.__eq__(other)

    @classmethod
    def default(cls):
        return cls(
            spine_types=deepcopy(HEADERS),
            token_categories=[c for c in TokenCategory],
            from_measure=None,
            to_measure=None,
            kern_type=Encoding.normalizedKern,
            instruments=None,
            show_measure_numbers=False,
            spine_ids=None
        )
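Note how `__init__` above defaults its list arguments to `None` and fills in copies inside the body (`deepcopy(HEADERS)`), rather than using a mutable default argument. A self-contained sketch of why that matters (`HEADERS` here is a stand-in value, not the kernpy constant):

```python
from copy import deepcopy

# Stand-in for kernpy's HEADERS constant, for illustration only.
HEADERS = ['**kern', '**mens']

class MiniOptions:
    def __init__(self, spine_types=None, spine_ids=None):
        # A fresh copy per instance: mutating one instance's list
        # cannot leak into another's, unlike a shared mutable default.
        self.spine_types = spine_types if spine_types is not None else deepcopy(HEADERS)
        self.spine_ids = spine_ids  # None means "export every spine"

a = MiniOptions()
b = MiniOptions()
a.spine_types.append('**harm')
print(b.spine_types)  # b is unaffected by the mutation of a
```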

__eq__(other)

Compare two ExportOptions objects.

Parameters:

- `other` (ExportOptions): The other ExportOptions object to compare. (required)

Returns (bool): True if the objects are equal, False otherwise.

Examples:

>>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options1 == options2
True
>>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
>>> options1 == options3
False
Source code in kernpy/core/exporter.py
def __eq__(self, other: 'ExportOptions') -> bool:
    """
    Compare two ExportOptions objects.

    Args:
        other: The other ExportOptions object to compare.

    Returns (bool):
        True if the objects are equal, False otherwise.

    Examples:
        >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options1 == options2
        True

        >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
        >>> options1 == options3
        False
    """
    return self.spine_types == other.spine_types and \
        self.token_categories == other.token_categories and \
        self.from_measure == other.from_measure and \
        self.to_measure == other.to_measure and \
        self.kern_type == other.kern_type and \
        self.instruments == other.instruments and \
        self.show_measure_numbers == other.show_measure_numbers and \
        self.spine_ids == other.spine_ids

__init__(spine_types=None, token_categories=None, from_measure=None, to_measure=None, kern_type=Encoding.normalizedKern, instruments=None, show_measure_numbers=False, spine_ids=None)

Create a new ExportOptions object.

Parameters:

- `spine_types` (Iterable): `**kern`, `**mens`, etc. (default: None)
- `token_categories` (Iterable): The `TokenCategory` values to include. (default: None)
- `from_measure` (int): The measure to start exporting from. When None, the exporter starts at the beginning of the file. The first measure is 1. (default: None)
- `to_measure` (int): The measure to stop exporting at. When None, the exporter ends at the end of the file. (default: None)
- `kern_type` (Encoding): The type of kern file to export. (default: normalizedKern)
- `instruments` (Iterable): The instruments to export. When None, all the instruments will be exported. (default: None)
- `show_measure_numbers` (bool): Show the measure numbers in the exported file. (default: False)
- `spine_ids` (Iterable): The ids of the spines to export. When None, all the spines will be exported. Spine ids start at 0 and increase by 1. (default: None)
Example

>>> import kernpy

Create the importer and read the file

>>> hi = Importer()
>>> document = hi.import_file('file.krn')
>>> exporter = Exporter()

Export the file with the specified options

>>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> exported_data = exporter.export_string(document, options)

Export only the lyrics

>>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LYRICS])
>>> exported_data = exporter.export_string(document, options)

Export the comments

>>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LINE_COMMENTS, TokenCategory.FIELD_COMMENTS])
>>> exported_data = exporter.export_string(document, options)

Export using the eKern version

>>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES, kern_type=Encoding.eKern)
>>> exported_data = exporter.export_string(document, options)

Source code in kernpy/core/exporter.py
def __init__(
        self,
        spine_types: [] = None,
        token_categories: [] = None,
        from_measure: int = None,
        to_measure: int = None,
        kern_type: Encoding = Encoding.normalizedKern,
        instruments: [] = None,
        show_measure_numbers: bool = False,
        spine_ids: [int] = None
):
    """
    Create a new ExportOptions object.

    Args:
        spine_types (Iterable): **kern, **mens, etc...
        token_categories (Iterable): TokenCategory
        from_measure (int): The measure to start exporting. When None, the exporter will start from the beginning of the file. The first measure is 1
        to_measure (int): The measure to end exporting. When None, the exporter will end at the end of the file.
        kern_type (Encoding): The type of the kern file to export.
        instruments (Iterable): The instruments to export. When None, all the instruments will be exported.
        show_measure_numbers (Bool): Show the measure numbers in the exported file.
        spine_ids (Iterable): The ids of the spines to export. When None, all the spines will be exported. Spine ids start at 0 and increase by 1.

    Example:
        >>> import kernpy

        Create the importer and read the file
        >>> hi = Importer()
        >>> document = hi.import_file('file.krn')
        >>> exporter = Exporter()

        Export the file with the specified options
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> exported_data = exporter.export_string(document, options)

        Export only the lyrics
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LYRICS])
        >>> exported_data = exporter.export_string(document, options)

        Export the comments
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=[TokenCategory.LINE_COMMENTS, TokenCategory.FIELD_COMMENTS])
        >>> exported_data = exporter.export_string(document, options)

        Export using the eKern version
        >>> options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES, kern_type=Encoding.eKern)
        >>> exported_data = exporter.export_string(document, options)

    """
    self.spine_types = spine_types if spine_types is not None else deepcopy(HEADERS)
    self.from_measure = from_measure
    self.to_measure = to_measure
    self.token_categories = token_categories if token_categories is not None else [c for c in TokenCategory]
    self.kern_type = kern_type
    self.instruments = instruments
    self.show_measure_numbers = show_measure_numbers
    self.spine_ids = spine_ids  # When exporting, if spine_ids=None all the spines will be exported.

__ne__(other)

Compare two ExportOptions objects.

Parameters:

- `other` (ExportOptions): The other ExportOptions object to compare. (required)

Returns (bool): True if the objects are not equal, False otherwise.

Examples:

>>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
>>> options1 != options2
False
>>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
>>> options1 != options3
True
Source code in kernpy/core/exporter.py
def __ne__(self, other: 'ExportOptions') -> bool:
    """
    Compare two ExportOptions objects.

    Args:
        other (ExportOptions): The other ExportOptions object to compare.

    Returns (bool):
        True if the objects are not equal, False otherwise.

    Examples:
        >>> options1 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options2 = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES)
        >>> options1 != options2
        False

        >>> options3 = ExportOptions(spine_types=['**kern', '**harm'], token_categories=BEKERN_CATEGORIES)
        >>> options1 != options3
        True
    """
    return not self.__eq__(other)
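The `__eq__`/`__ne__` pair above follows a common idiom: compare every configuration field in `__eq__`, and let `__ne__` delegate so the two can never disagree. A compact, self-contained sketch of the same idiom (the class and fields here are illustrative):

```python
# Field-wise equality with __ne__ delegating to __eq__, as in ExportOptions.
class OptionsSketch:
    def __init__(self, spine_types, from_measure=None):
        self.spine_types = spine_types
        self.from_measure = from_measure

    def __eq__(self, other):
        return (self.spine_types == other.spine_types
                and self.from_measure == other.from_measure)

    def __ne__(self, other):
        # Delegation guarantees __ne__ is always the negation of __eq__.
        return not self.__eq__(other)

print(OptionsSketch(['**kern']) == OptionsSketch(['**kern']))
print(OptionsSketch(['**kern']) != OptionsSketch(['**kern', '**harm']))
```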

Exporter

Source code in kernpy/core/exporter.py
class Exporter:
    def export_string(self, document: Document, options: ExportOptions) -> str:
        self.export_options_validator(document, options)

        rows = []

        if options.to_measure is not None and options.to_measure < len(document.measure_start_tree_stages):

            if options.to_measure < len(document.measure_start_tree_stages) - 1:
                to_stage = document.measure_start_tree_stages[
                    options.to_measure]  # take the barlines from the next coming measure
            else:
                to_stage = len(document.tree.stages) - 1  # all stages
        else:
            to_stage = len(document.tree.stages) - 1  # all stages

        if options.from_measure:
            # In case of beginning not from the first measure, we recover the spine creation and the headers
            # Traversed in reverse order to only include the active spines at the given measure...
            from_stage = document.measure_start_tree_stages[options.from_measure - 1]
            next_nodes = document.tree.stages[from_stage]
            while next_nodes and len(next_nodes) > 0 and next_nodes[0] != document.tree.root:
                row = []
                new_next_nodes = []
                non_place_holder_in_row = False
                spine_operation_row = False
                for node in next_nodes:
                    if isinstance(node.token, SpineOperationToken):
                        spine_operation_row = True
                        break

                for node in next_nodes:
                    content = ''
                    if isinstance(node.token, HeaderToken) and node.token.encoding in options.spine_types:
                        content = self.export_token(node.token, options)
                        non_place_holder_in_row = True
                    elif spine_operation_row:
                        # either if it is the split operator that has been cancelled, or the join one
                        if isinstance(node.token, SpineOperationToken) and (node.token.is_cancelled_at(
                                from_stage) or node.last_spine_operator_node and node.last_spine_operator_node.token.cancelled_at_stage == node.stage):
                            content = '*'
                        else:
                            content = self.export_token(node.token, options)
                            non_place_holder_in_row = True
                    if content:
                        row.append(content)
                    new_next_nodes.append(node.parent)
                next_nodes = new_next_nodes
                if non_place_holder_in_row:  # if the row contains only placeholders due to an omitted placeholder, don't add it
                    rows.insert(0, row)

            # now, export the signatures
            node_signatures = None
            for node in document.tree.stages[from_stage]:
                node_signature_rows = []
                for signature_node in node.last_signature_nodes.nodes.values():
                    if not self.is_signature_cancelled(signature_node, node, from_stage, to_stage):
                        node_signature_rows.append(self.export_token(signature_node.token, options))
                if len(node_signature_rows) > 0:
                    if not node_signatures:
                        node_signatures = []  # an array for each spine
                    else:
                        if len(node_signatures[0]) != len(node_signature_rows):
                            raise Exception(f'Node signature mismatch: multiple spines with signatures at measure {len(rows)}')  # TODO better message
                    node_signatures.append(node_signature_rows)

            if node_signatures:
                for irow in range(len(node_signatures[0])):  # all spines have the same number of rows
                    row = []
                    for icol in range(len(node_signatures)):  #len(node_signatures) = number of spines
                        row.append(node_signatures[icol][irow])
                    rows.append(row)

        else:
            from_stage = 0
            rows = []

        #if not node.token.category == TokenCategory.LINE_COMMENTS and not node.token.category == TokenCategory.FIELD_COMMENTS:
        for stage in range(from_stage, to_stage + 1):  # to_stage included
            row = []
            for node in document.tree.stages[stage]:
                self.append_row(document=document, node=node, options=options, row=row)

            if len(row) > 0:
                rows.append(row)

        # now, add the spine terminate row
        if options.to_measure is not None and len(rows) > 0 and rows[len(rows) - 1][
            0] != '*-':  # if the terminate is not added yet
            spine_count = len(rows[len(rows) - 1])
            row = []
            for i in range(spine_count):
                row.append('*-')
            rows.append(row)

        result = ""
        for row in rows:
            if not empty_row(row):
                result += '\t'.join(row) + '\n'
        return result

    def compute_header_type(self, node) -> Optional[HeaderToken]:
        """
        Compute the header type of the node.

        Args:
            node (Node): The node to compute.

        Returns (Optional[Token]): The header type `Node` object. None if the current node is the header.

        """
        if isinstance(node.token, HeaderToken):
            header_type = node.token
        elif node.header_node:
            header_type = node.header_node.token
        else:
            header_type = None
        return header_type

    def export_token(self, token: Token, options: ExportOptions) -> str:
        if isinstance(token, HeaderToken):
            new_token = HeaderTokenGenerator.new(token=token, type=options.kern_type)
        else:
            new_token = token
        return (TokenizerFactory
                .create(options.kern_type.value, token_categories=options.token_categories)
                .tokenize(new_token))

    def append_row(self, document: Document, node, options: ExportOptions, row: list) -> bool:
        """
        Append a row to the row list if the node accomplishes the requirements.
        Args:
            document (Document): The document with the spines.
            node (Node): The node to append.
            options (ExportOptions): The export options to filter the token.
            row (list): The row to append.

        Returns (bool): True if the row was appended. False if the row was not appended.
        """
        header_type = self.compute_header_type(node)

        if (header_type is not None
                and header_type.encoding in options.spine_types
                and not node.token.hidden
                and (isinstance(node.token, ComplexToken) or node.token.category in options.token_categories)
                and (options.spine_ids is None or header_type.spine_id in options.spine_ids)
        # If None, all the spines will be exported. TODO: put all the spines as spine_ids = None
        ):
            row.append(self.export_token(node.token, options))
            return True

        return False

    def get_spine_types(self, document: Document, spine_types: list = None):
        """
        Get the spine types from the document.

        Args:
            document (Document): The document with the spines.
            spine_types (list): The spine types to export. If None, all the spine types will be exported.

        Returns: A list with the spine types.

        Examples:
            >>> exporter = Exporter()
            >>> exporter.get_spine_types(document)
            ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
            >>> exporter.get_spine_types(document, None)
            ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
            >>> exporter.get_spine_types(document, ['**kern'])
            ['**kern', '**kern', '**kern', '**kern']
            >>> exporter.get_spine_types(document, ['**kern', '**root'])
            ['**kern', '**kern', '**kern', '**kern', '**root']
            >>> exporter.get_spine_types(document, ['**kern', '**root', '**harm'])
            ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
            >>> exporter.get_spine_types(document, [])
            []
        """
        if spine_types is not None and len(spine_types) == 0:
            return []

        options = ExportOptions(spine_types=spine_types, token_categories=[TokenCategory.HEADER])
        content = self.export_string(document, options)

        # Remove all after the first line: **kern, **mens, etc... are always in the first row
        lines = content.split('\n')
        first_line = lines[0:1]
        tokens = first_line[0].split('\t')

        return tokens if tokens not in [[], ['']] else []


    @classmethod
    def export_options_validator(cls, document: Document, options: ExportOptions) -> None:
        """
        Validate the export options. Raise an exception if the options are invalid.

        Args:
            document: `Document` - The document to export.
            options: `ExportOptions` - The options to export the document.

        Returns: None

        Example:
            >>> export_options_validator(document, options)
            ValueError: option from_measure must be >=0 but -1 was found.
            >>> export_options_validator(document, options2)
            None
        """
        if options.from_measure is not None and options.from_measure < 0:
            raise ValueError(f'option from_measure must be >=0 but {options.from_measure} was found. ')
        if options.to_measure is not None and options.to_measure > len(document.measure_start_tree_stages):
            # "TODO: DAVID, check options.to_measure bounds. len(document.measure_start_tree_stages) or len(document.measure_start_tree_stages) - 1"
            raise ValueError(
                f'option to_measure must be <= {len(document.measure_start_tree_stages)} but {options.to_measure} was found. ')
        if options.to_measure is not None and options.from_measure is not None and options.to_measure < options.from_measure:
            raise ValueError(
                f'option to_measure must be >= from_measure but {options.to_measure} < {options.from_measure} was found. ')

    def is_signature_cancelled(self, signature_node, node, from_stage, to_stage) -> bool:
        if node.token.__class__ == signature_node.token.__class__:
            return True
        elif isinstance(node.token, NoteRestToken):
            return False
        elif from_stage < to_stage:
            for child in node.children:
                if self.is_signature_cancelled(signature_node, child, from_stage + 1, to_stage):
                    return True
            return False
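The final assembly step of `export_string` above joins each surviving row with tabs and terminates every line with a newline, skipping rows with no real content. That step in isolation, as a runnable sketch (the helper name and sample rows are illustrative):

```python
# Sketch of export_string's final assembly: tab-separated cells,
# newline-terminated rows, empty rows dropped.
def assemble(rows):
    result = ""
    for row in rows:
        if any(cell for cell in row):  # skip rows that hold no real content
            result += '\t'.join(row) + '\n'
    return result

rows = [['**kern', '**kern'], ['4c', '4e'], ['', ''], ['*-', '*-']]
print(assemble(rows))
```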

append_row(document, node, options, row)

Append a row to the row list if the node meets the requirements.

Parameters:

- `document` (Document): The document with the spines. (required)
- `node` (Node): The node to append. (required)
- `options` (ExportOptions): The export options used to filter the token. (required)
- `row` (list): The row to append to. (required)

Returns (bool): True if the row was appended. False if the row was not appended.

Source code in kernpy/core/exporter.py
def append_row(self, document: Document, node, options: ExportOptions, row: list) -> bool:
    """
    Append a row to the row list if the node accomplishes the requirements.
    Args:
        document (Document): The document with the spines.
        node (Node): The node to append.
        options (ExportOptions): The export options to filter the token.
        row (list): The row to append.

    Returns (bool): True if the row was appended. False if the row was not appended.
    """
    header_type = self.compute_header_type(node)

    if (header_type is not None
            and header_type.encoding in options.spine_types
            and not node.token.hidden
            and (isinstance(node.token, ComplexToken) or node.token.category in options.token_categories)
            and (options.spine_ids is None or header_type.spine_id in options.spine_ids)
    # If None, all the spines will be exported. TODO: put all the spines as spine_ids = None
    ):
        row.append(self.export_token(node.token, options))
        return True

    return False

compute_header_type(node)

Compute the header type of the node.

Parameters:

- `node` (Node): The node to compute. (required)

Returns (Optional[Token]): The header type `Node` object. None if the current node is the header.

Source code in kernpy/core/exporter.py
def compute_header_type(self, node) -> Optional[HeaderToken]:
    """
    Compute the header type of the node.

    Args:
        node (Node): The node to compute.

    Returns (Optional[Token]): The header type `Node` object. None if the current node is the header.

    """
    if isinstance(node.token, HeaderToken):
        header_type = node.token
    elif node.header_node:
        header_type = node.header_node.token
    else:
        header_type = None
    return header_type

export_options_validator(document, options) classmethod

Validate the export options. Raise an exception if the options are invalid.

Parameters:

- `document` (Document): The document to export. (required)
- `options` (ExportOptions): The options to export the document. (required)

Returns: None

Example

>>> export_options_validator(document, options)
ValueError: option from_measure must be >=0 but -1 was found.
>>> export_options_validator(document, options2)
None

Source code in kernpy/core/exporter.py
@classmethod
def export_options_validator(cls, document: Document, options: ExportOptions) -> None:
    """
    Validate the export options. Raise an exception if the options are invalid.

    Args:
        document: `Document` - The document to export.
        options: `ExportOptions` - The options to export the document.

    Returns: None

    Example:
        >>> export_options_validator(document, options)
        ValueError: option from_measure must be >=0 but -1 was found.
        >>> export_options_validator(document, options2)
        None
    """
    if options.from_measure is not None and options.from_measure < 0:
        raise ValueError(f'option from_measure must be >=0 but {options.from_measure} was found. ')
    if options.to_measure is not None and options.to_measure > len(document.measure_start_tree_stages):
        # "TODO: DAVID, check options.to_measure bounds. len(document.measure_start_tree_stages) or len(document.measure_start_tree_stages) - 1"
        raise ValueError(
            f'option to_measure must be <= {len(document.measure_start_tree_stages)} but {options.to_measure} was found. ')
    if options.to_measure is not None and options.from_measure is not None and options.to_measure < options.from_measure:
        raise ValueError(
            f'option to_measure must be >= from_measure but {options.to_measure} < {options.from_measure} was found. ')
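The three bounds checks above can be restated as a standalone validator; `total_measures` below stands in for `len(document.measure_start_tree_stages)`, so this is a sketch of the logic, not the kernpy function itself:

```python
# Sketch of the measure-bounds checks performed by export_options_validator.
def validate_measures(from_measure, to_measure, total_measures):
    if from_measure is not None and from_measure < 0:
        raise ValueError(f'option from_measure must be >=0 but {from_measure} was found.')
    if to_measure is not None and to_measure > total_measures:
        raise ValueError(f'option to_measure must be <= {total_measures} but {to_measure} was found.')
    if to_measure is not None and from_measure is not None and to_measure < from_measure:
        raise ValueError(f'option to_measure must be >= from_measure but {to_measure} < {from_measure} was found.')

validate_measures(1, 8, 10)      # valid range: no exception
try:
    validate_measures(5, 3, 10)  # to_measure < from_measure
except ValueError as e:
    print(e)
```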

get_spine_types(document, spine_types=None)

Get the spine types from the document.

Parameters:

- `document` (Document): The document with the spines. (required)
- `spine_types` (list): The spine types to export. If None, all the spine types will be exported. (default: None)

Returns: A list with the spine types.

Examples:

>>> exporter = Exporter()
>>> exporter.get_spine_types(document)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> exporter.get_spine_types(document, None)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> exporter.get_spine_types(document, ['**kern'])
['**kern', '**kern', '**kern', '**kern']
>>> exporter.get_spine_types(document, ['**kern', '**root'])
['**kern', '**kern', '**kern', '**kern', '**root']
>>> exporter.get_spine_types(document, ['**kern', '**root', '**harm'])
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> exporter.get_spine_types(document, [])
[]
Source code in kernpy/core/exporter.py
def get_spine_types(self, document: Document, spine_types: list = None):
    """
    Get the spine types from the document.

    Args:
        document (Document): The document with the spines.
        spine_types (list): The spine types to export. If None, all the spine types will be exported.

    Returns: A list with the spine types.

    Examples:
        >>> exporter = Exporter()
        >>> exporter.get_spine_types(document)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> exporter.get_spine_types(document, None)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> exporter.get_spine_types(document, ['**kern'])
        ['**kern', '**kern', '**kern', '**kern']
        >>> exporter.get_spine_types(document, ['**kern', '**root'])
        ['**kern', '**kern', '**kern', '**kern', '**root']
        >>> exporter.get_spine_types(document, ['**kern', '**root', '**harm'])
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> exporter.get_spine_types(document, [])
        []
    """
    if spine_types is not None and len(spine_types) == 0:
        return []

    options = ExportOptions(spine_types=spine_types, token_categories=[TokenCategory.HEADER])
    content = self.export_string(document, options)

    # Remove all after the first line: **kern, **mens, etc... are always in the first row
    lines = content.split('\n')
    first_line = lines[0:1]
    tokens = first_line[0].split('\t')

    return tokens if tokens not in [[], ['']] else []
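The core idea of `get_spine_types` can be sketched without kernpy: the spine headers (`**kern`, `**harm`, etc.) always occupy the first row, so reading them amounts to splitting the first line on tabs and filtering. The function below is a minimal stand-alone sketch of that logic, not the real implementation (which routes through `ExportOptions` and the exporter):

```python
def spine_types_from_header(kern_content: str, spine_types=None):
    """Sketch of the spine-type lookup: read the header row of a **kern body."""
    # An empty filter list means "export nothing", mirroring the method above.
    if spine_types is not None and len(spine_types) == 0:
        return []
    # Headers such as **kern or **harm always sit in the first row.
    tokens = kern_content.split('\n')[0].split('\t')
    if spine_types is None:
        return tokens
    return [t for t in tokens if t in spine_types]

spine_types_from_header('**kern\t**harm\n4c\tI\n*-\t*-', ['**kern'])  # → ['**kern']
```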

F3Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class F3Clef(Clef):
    def __init__(self):
        """
        Initializes the F Clef object.
        """
        super().__init__(DiatonicPitch('F'), 3)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('B', 3)

__init__()

Initializes the F Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the F Clef object.
    """
    super().__init__(DiatonicPitch('F'), 3)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('B', 3)

F4Clef

Bases: Clef

Source code in kernpy/core/gkern.py
class F4Clef(Clef):
    def __init__(self):
        """
        Initializes the F Clef object.
        """
        super().__init__(DiatonicPitch('F'), 4)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('G', 2)

__init__()

Initializes the F Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the F Clef object.
    """
    super().__init__(DiatonicPitch('F'), 4)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('G', 2)

FieldCommentToken

Bases: SimpleToken

FieldCommentToken class stores the metacomments of the score. Usually these are comments starting with !!!.

Source code in kernpy/core/tokens.py
class FieldCommentToken(SimpleToken):
    """
    FieldCommentToken class stores the metacomments of the score.
    Usually these are comments starting with `!!!`.

    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.FIELD_COMMENTS)

FingSpineImporter

Bases: SpineImporter

Source code in kernpy/core/fing_spine_importer.py
class FingSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        FingSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()


    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.FINGERING)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.FINGERING)

        return token

__init__(verbose=False)

FingSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/fing_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    FingSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

GClef

Bases: Clef

Source code in kernpy/core/gkern.py
class GClef(Clef):
    def __init__(self):
        """
        Initializes the G Clef object.
        """
        super().__init__(DiatonicPitch('G'), 2)

    def bottom_line(self) -> AgnosticPitch:
        """
        Returns the pitch of the bottom line of the staff.
        """
        return AgnosticPitch('E', 4)

__init__()

Initializes the G Clef object.

Source code in kernpy/core/gkern.py
def __init__(self):
    """
    Initializes the G Clef object.
    """
    super().__init__(DiatonicPitch('G'), 2)

bottom_line()

Returns the pitch of the bottom line of the staff.

Source code in kernpy/core/gkern.py
def bottom_line(self) -> AgnosticPitch:
    """
    Returns the pitch of the bottom line of the staff.
    """
    return AgnosticPitch('E', 4)
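Taken together, the clef classes above each pin the bottom staff line to a fixed pitch. A minimal stand-alone sketch of that mapping, using a plain dict instead of kernpy's `Clef` hierarchy (the dict layout is an assumption for illustration; the pitches are the ones returned by the `bottom_line()` methods shown here):

```python
# (clef letter, clef line) -> (bottom-line note, octave), mirroring the
# GClef, F3Clef and F4Clef classes above.
BOTTOM_LINES = {
    ('G', 2): ('E', 4),  # treble clef
    ('F', 3): ('B', 3),
    ('F', 4): ('G', 2),  # bass clef
}

def bottom_line(clef_letter: str, clef_line: int):
    """Return the (note, octave) of the bottom staff line for a clef."""
    return BOTTOM_LINES[(clef_letter, clef_line)]

bottom_line('G', 2)  # → ('E', 4)
```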

GKernExporter

Source code in kernpy/core/gkern.py
class GKernExporter:
    def __init__(self, clef: Clef):
        self.clef = clef

    def export(self, staff: Staff, pitch: AgnosticPitch) -> str:
        """
        Exports the given pitch to a graphic **kern encoding.
        """
        position = self.agnostic_position(staff, pitch)
        return f"{GRAPHIC_TOKEN_SEPARATOR}{str(position)}"

    def agnostic_position(self, staff: Staff, pitch: AgnosticPitch) -> PositionInStaff:
        """
        Returns the agnostic position in staff for the given pitch.
        """
        return staff.position_in_staff(clef=self.clef, pitch=pitch)

agnostic_position(staff, pitch)

Returns the agnostic position in staff for the given pitch.

Source code in kernpy/core/gkern.py
def agnostic_position(self, staff: Staff, pitch: AgnosticPitch) -> PositionInStaff:
    """
    Returns the agnostic position in staff for the given pitch.
    """
    return staff.position_in_staff(clef=self.clef, pitch=pitch)

export(staff, pitch)

Exports the given pitch to a graphic **kern encoding.

Source code in kernpy/core/gkern.py
def export(self, staff: Staff, pitch: AgnosticPitch) -> str:
    """
    Exports the given pitch to a graphic **kern encoding.
    """
    position = self.agnostic_position(staff, pitch)
    return f"{GRAPHIC_TOKEN_SEPARATOR}{str(position)}"

Generic

Generic class.

This class provides support to the public API for KernPy.

The main function implementations are provided here.

Source code in kernpy/core/generic.py
class Generic:
    """
    Generic class.

    This class provides support to the public API for KernPy.

    The main function implementations are provided here.
    """

    @classmethod
    def read(
            cls,
            path: Path,
            strict: Optional[bool] = False
    ) -> (Document, List[str]):
        """

        Args:
            path:
            strict:

        Returns:

        """
        importer = Importer()
        document = importer.import_file(path)
        errors = importer.errors

        if strict and len(errors) > 0:
            raise Exception(importer.get_error_messages())

        return document, errors

    @classmethod
    def create(
            cls,
            content: str,
            strict: Optional[bool] = False
    ) -> (Document, List[str]):
        """

        Args:
            content:
            strict:

        Returns:

        """
        importer = Importer()
        document = importer.import_string(content)
        errors = importer.errors

        if strict and len(errors) > 0:
            raise Exception(importer.get_error_messages())

        return document, errors

    @classmethod
    def export(
            cls,
            document: Document,
            options: ExportOptions
    ) -> str:
        """

        Args:
            document:
            options:

        Returns:

        """
        exporter = Exporter()
        return exporter.export_string(document, options)

    @classmethod
    def store(
            cls,
            document: Document,
            path: Path,
            options: ExportOptions
    ) -> None:
        """

        Args:
            document:
            path:
            options:

        Returns:
        """
        content = cls.export(document, options)
        _write(path, content)

    @classmethod
    def store_graph(
            cls,
            document: Document,
            path: Path
    ) -> None:
        """

        Args:
            document:
            path:

        Returns:
        """
        graph_exporter = GraphvizExporter()
        graph_exporter.export_to_dot(document.tree, path)

    @classmethod
    def get_spine_types(
            cls,
            document: Document,
            spine_types: Optional[Sequence[str]] = None
    ) -> List[str]:
        """

        Args:
            document:
            spine_types:

        Returns:

        """
        exporter = Exporter()
        return exporter.get_spine_types(document, spine_types)

    @classmethod
    def merge(
            cls,
            contents: Sequence[str],
            strict: Optional[bool] = False
    ) -> Tuple[Document, List[Tuple[int, int]]]:
        """

        Args:
            contents:
            strict:

        Returns:

        """
        if len(contents) < 2:
            raise ValueError(f"Concatenation action requires at least two documents to concatenate. "
                             f"But {len(contents)} was given.")

        raise NotImplementedError("The merge function is not implemented yet.")

        doc_a, err_a = cls.create(contents[0], strict=strict)
        for i, content in enumerate(contents[1:]):
            doc_b, err_b = cls.create(content, strict=strict)

            if strict and (len(err_a) > 0 or len(err_b) > 0):
                raise Exception(f"Errors were found during the creation of the documents "
                                f"while using the strict=True option. "
                                f"Description: concatenating: {err_a if len(err_a) > 0 else err_b}")

            doc_a.add(doc_b)
        return cls.export(
            document=doc_a,
            options=options
        )

    @classmethod
    def concat(
            cls,
            contents: Sequence[str],
            separator: Optional[str] = None
    ) -> Tuple[Document, List[Tuple[int, int]]]:
        """

        Args:
            contents:
            separator:

        Returns:

        """
        # Raw kern content
        if separator is None:
            separator = '\n'

        if len(contents) == 0:
            raise ValueError("No contents to merge. At least one content is required.")

        raw_kern = ''
        document = None
        indexes = []
        low_index = 0
        high_index = 0

        # Merge all fragments
        for content in contents:
            raw_kern += separator + content
            document, _ = create(raw_kern)
            high_index = document.measures_count()
            indexes.append((low_index, high_index))

            low_index = high_index + 1  # Next fragment start is the previous fragment end + 1

        if document is None:
            raise Exception("Failed to merge the contents. The document is None.")

        return document, indexes

    @classmethod
    def parse_options_to_ExportOptions(
            cls,
            **kwargs: Any
    ) -> ExportOptions:
        """

        Args:
            **kwargs:

        Returns:

        """
        options = ExportOptions.default()

        # Compute the valid token categories
        options.token_categories = TokenCategoryHierarchyMapper.valid(
            include=kwargs.get('include', None),
            exclude=kwargs.get('exclude', None)
        )

        # Use kwargs to update the ExportOptions object
        for key, value in kwargs.items():
            if key in ['include', 'exclude', 'token_categories']:  # Skip these keys: generated manually
                continue

            if value is not None:
                setattr(options, key, value)

        return options

concat(contents, separator=None) classmethod

Parameters:

Name Type Description Default
contents Sequence[str]

The **kern fragments to concatenate.

required
separator Optional[str]

The separator inserted between fragments. If None, a newline is used.

None

Returns: A tuple with the resulting Document and the (first, last) measure indexes of each fragment.

Source code in kernpy/core/generic.py
@classmethod
def concat(
        cls,
        contents: Sequence[str],
        separator: Optional[str] = None
) -> Tuple[Document, List[Tuple[int, int]]]:
    """

    Args:
        contents:
        separator:

    Returns:

    """
    # Raw kern content
    if separator is None:
        separator = '\n'

    if len(contents) == 0:
        raise ValueError("No contents to merge. At least one content is required.")

    raw_kern = ''
    document = None
    indexes = []
    low_index = 0
    high_index = 0

    # Merge all fragments
    for content in contents:
        raw_kern += separator + content
        document, _ = create(raw_kern)
        high_index = document.measures_count()
        indexes.append((low_index, high_index))

        low_index = high_index + 1  # Next fragment start is the previous fragment end + 1

    if document is None:
        raise Exception("Failed to merge the contents. The document is None.")

    return document, indexes
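The `(low_index, high_index)` bookkeeping in `concat` can be isolated into a small sketch. Here `measure_counts` (measures contributed per fragment) is a hypothetical input for illustration; the real method recounts measures by re-importing the merged content after each fragment:

```python
from itertools import accumulate

def concat_measure_indexes(measure_counts):
    """Sketch of concat's index bookkeeping: turn per-fragment measure
    counts into (first, last) measure index pairs for each fragment."""
    indexes = []
    low = 0
    for high in accumulate(measure_counts):
        indexes.append((low, high))
        low = high + 1  # next fragment starts right after the previous one ends
    return indexes

concat_measure_indexes([3, 2, 4])  # → [(0, 3), (4, 5), (6, 9)]
```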

create(content, strict=False) classmethod

Parameters:

Name Type Description Default
content str

The **kern content to import.

required
strict Optional[bool]

If True, raise an exception when any import error is found.

False

Returns: A tuple with the Document and the list of import error messages.

Source code in kernpy/core/generic.py
@classmethod
def create(
        cls,
        content: str,
        strict: Optional[bool] = False
) -> (Document, List[str]):
    """

    Args:
        content:
        strict:

    Returns:

    """
    importer = Importer()
    document = importer.import_string(content)
    errors = importer.errors

    if strict and len(errors) > 0:
        raise Exception(importer.get_error_messages())

    return document, errors

export(document, options) classmethod

Parameters:

Name Type Description Default
document Document

The document to export.

required
options ExportOptions

The export options.

required

Returns: A string with the exported content.

Source code in kernpy/core/generic.py
@classmethod
def export(
        cls,
        document: Document,
        options: ExportOptions
) -> str:
    """

    Args:
        document:
        options:

    Returns:

    """
    exporter = Exporter()
    return exporter.export_string(document, options)

get_spine_types(document, spine_types=None) classmethod

Parameters:

Name Type Description Default
document Document

The document with the spines.

required
spine_types Optional[Sequence[str]]

The spine types to include. If None, all the spine types are returned.

None

Returns: A list with the spine types.

Source code in kernpy/core/generic.py
@classmethod
def get_spine_types(
        cls,
        document: Document,
        spine_types: Optional[Sequence[str]] = None
) -> List[str]:
    """

    Args:
        document:
        spine_types:

    Returns:

    """
    exporter = Exporter()
    return exporter.get_spine_types(document, spine_types)

merge(contents, strict=False) classmethod

Parameters:

Name Type Description Default
contents Sequence[str]

The **kern contents to merge.

required
strict Optional[bool]

If True, raise an exception when any import error is found.

False

Returns: A tuple with the merged Document and the measure indexes of each fragment.

Source code in kernpy/core/generic.py
@classmethod
def merge(
        cls,
        contents: Sequence[str],
        strict: Optional[bool] = False
) -> Tuple[Document, List[Tuple[int, int]]]:
    """

    Args:
        contents:
        strict:

    Returns:

    """
    if len(contents) < 2:
        raise ValueError(f"Concatenation action requires at least two documents to concatenate. "
                         f"But {len(contents)} was given.")

    raise NotImplementedError("The merge function is not implemented yet.")

    doc_a, err_a = cls.create(contents[0], strict=strict)
    for i, content in enumerate(contents[1:]):
        doc_b, err_b = cls.create(content, strict=strict)

        if strict and (len(err_a) > 0 or len(err_b) > 0):
            raise Exception(f"Errors were found during the creation of the documents "
                            f"while using the strict=True option. "
                            f"Description: concatenating: {err_a if len(err_a) > 0 else err_b}")

        doc_a.add(doc_b)
    return cls.export(
        document=doc_a,
        options=options
    )

parse_options_to_ExportOptions(**kwargs) classmethod

Parameters:

Name Type Description Default
**kwargs Any

The ExportOptions fields to set, plus the 'include' and 'exclude' category filters.

{}

Returns: An ExportOptions object built from the given keyword arguments.

Source code in kernpy/core/generic.py
@classmethod
def parse_options_to_ExportOptions(
        cls,
        **kwargs: Any
) -> ExportOptions:
    """

    Args:
        **kwargs:

    Returns:

    """
    options = ExportOptions.default()

    # Compute the valid token categories
    options.token_categories = TokenCategoryHierarchyMapper.valid(
        include=kwargs.get('include', None),
        exclude=kwargs.get('exclude', None)
    )

    # Use kwargs to update the ExportOptions object
    for key, value in kwargs.items():
        if key in ['include', 'exclude', 'token_categories']:  # Skip these keys: generated manually
            continue

        if value is not None:
            setattr(options, key, value)

    return options
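The kwargs loop above is a common options-building pattern: copy every non-None keyword argument onto an options object, skipping keys that were already derived separately. A stand-alone sketch, using `SimpleNamespace` as a stand-in for `ExportOptions` (an assumption for illustration):

```python
from types import SimpleNamespace

def apply_kwargs(options, skip=('include', 'exclude', 'token_categories'), **kwargs):
    """Copy non-None keyword arguments onto an options object,
    skipping keys that are computed elsewhere."""
    for key, value in kwargs.items():
        if key in skip or value is None:
            continue
        setattr(options, key, value)
    return options

opts = apply_kwargs(SimpleNamespace(spine_types=None), spine_types=['**kern'], include={'CORE'})
opts.spine_types  # → ['**kern']
```

`include` is skipped because, as in `parse_options_to_ExportOptions`, it is consumed when computing the token categories rather than stored directly.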

read(path, strict=False) classmethod

Parameters:

Name Type Description Default
path Path

The path to the input **kern file.

required
strict Optional[bool]

If True, raise an exception when any import error is found.

False

Returns: A tuple with the Document and the list of import error messages.

Source code in kernpy/core/generic.py
@classmethod
def read(
        cls,
        path: Path,
        strict: Optional[bool] = False
) -> (Document, List[str]):
    """

    Args:
        path:
        strict:

    Returns:

    """
    importer = Importer()
    document = importer.import_file(path)
    errors = importer.errors

    if strict and len(errors) > 0:
        raise Exception(importer.get_error_messages())

    return document, errors

store(document, path, options) classmethod

Parameters:

Name Type Description Default
document Document

The document to export.

required
path Path

The output file path.

required
options ExportOptions

The export options.

required

Returns: None.

Source code in kernpy/core/generic.py
@classmethod
def store(
        cls,
        document: Document,
        path: Path,
        options: ExportOptions
) -> None:
    """

    Args:
        document:
        path:
        options:

    Returns:
    """
    content = cls.export(document, options)
    _write(path, content)

store_graph(document, path) classmethod

Parameters:

Name Type Description Default
document Document

The document whose tree will be exported.

required
path Path

The output DOT file path.

required

Returns: None.

Source code in kernpy/core/generic.py
@classmethod
def store_graph(
        cls,
        document: Document,
        path: Path
) -> None:
    """

    Args:
        document:
        path:

    Returns:
    """
    graph_exporter = GraphvizExporter()
    graph_exporter.export_to_dot(document.tree, path)

GraphvizExporter

Source code in kernpy/core/graphviz_exporter.py
class GraphvizExporter:
    def export_token(self, token: Token):
        if token is None or token.encoding is None:
            return ''
        else:
            return token.encoding.replace('\"', '\\"').replace('\\', '\\\\')

    @staticmethod
    def node_id(node: Node):
        return f"node{id(node)}"

    def export_to_dot(self, tree: MultistageTree, filename: Path = None):
        """
        Export the given MultistageTree to DOT format.

        Args:
            tree (MultistageTree): The tree to export.
            filename (Path or None): The output file path. If None, prints to stdout.
        """
        file = sys.stdout if filename is None else open(filename, 'w')

        try:
            file.write('digraph G {\n')
            file.write('    node [shape=record];\n')
            file.write('    rankdir=TB;\n')  # Ensure top-to-bottom layout

            # Create subgraphs for each stage
            for stage_index, stage in enumerate(tree.stages):
                if stage:
                    file.write('  {rank=same; ')
                    for node in stage:
                        file.write(f'"{self.node_id(node)}"; ')
                    file.write('}\n')

            # Write nodes and their connections
            self._write_nodes_iterative(tree.root, file)
            self._write_edges_iterative(tree.root, file)

            file.write('}\n')

        finally:
            if filename is not None:
                file.close()  # Close only if we explicitly opened a file

    def _write_nodes_iterative(self, root, file):
        stack = [root]

        while stack:
            node = stack.pop()
            header_label = f'header #{node.header_node.id}' if node.header_node else ''
            last_spine_operator_label = f'last spine op. #{node.last_spine_operator_node.id}' if node.last_spine_operator_node else ''
            category_name = getattr(getattr(getattr(node, "token", None), "category", None), "_name_", "Non defined category")


            top_record_label = f'{{ #{node.id}| stage {node.stage} | {header_label} | {last_spine_operator_label} | {category_name} }}'
            signatures_label = ''
            if node.last_signature_nodes and node.last_signature_nodes.nodes:
                for k, v in node.last_signature_nodes.nodes.items():
                    if signatures_label:
                        signatures_label += '|'
                    signatures_label += f'{k} #{v.id}'

            if isinstance(node.token, SpineOperationToken) and node.token.cancelled_at_stage:
                signatures_label += f'| {{ cancelled at stage {node.token.cancelled_at_stage} }}'

            file.write(f'  "{self.node_id(node)}" [label="{{ {top_record_label} | {signatures_label} | {self.export_token(node.token)} }}"];\n')

            # Add children to the stack to be processed
            for child in reversed(node.children):
                stack.append(child)

    def _write_edges_iterative(self, root, file):
        stack = [root]

        while stack:
            node = stack.pop()
            for child in node.children:
                file.write(f'  "{self.node_id(node)}" -> "{self.node_id(child)}";\n')
                stack.append(child)

export_to_dot(tree, filename=None)

Export the given MultistageTree to DOT format.

Parameters:

Name Type Description Default
tree MultistageTree

The tree to export.

required
filename Path or None

The output file path. If None, prints to stdout.

None
Source code in kernpy/core/graphviz_exporter.py
def export_to_dot(self, tree: MultistageTree, filename: Path = None):
    """
    Export the given MultistageTree to DOT format.

    Args:
        tree (MultistageTree): The tree to export.
        filename (Path or None): The output file path. If None, prints to stdout.
    """
    file = sys.stdout if filename is None else open(filename, 'w')

    try:
        file.write('digraph G {\n')
        file.write('    node [shape=record];\n')
        file.write('    rankdir=TB;\n')  # Ensure top-to-bottom layout

        # Create subgraphs for each stage
        for stage_index, stage in enumerate(tree.stages):
            if stage:
                file.write('  {rank=same; ')
                for node in stage:
                    file.write(f'"{self.node_id(node)}"; ')
                file.write('}\n')

        # Write nodes and their connections
        self._write_nodes_iterative(tree.root, file)
        self._write_edges_iterative(tree.root, file)

        file.write('}\n')

    finally:
        if filename is not None:
            file.close()  # Close only if we explicitly opened a file
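Stripped of the node labels and rank constraints, the DOT output produced by `export_to_dot` boils down to one `parent -> child` edge line per tree edge inside a `digraph` block. A minimal sketch of that emission (node styling and stage subgraphs omitted; this is not the kernpy implementation itself):

```python
def tiny_dot(edges):
    """Emit a minimal Graphviz digraph: one quoted edge line per
    (parent, child) pair, wrapped in a digraph block."""
    lines = ['digraph G {']
    for parent, child in edges:
        lines.append(f'  "{parent}" -> "{child}";')
    lines.append('}')
    return '\n'.join(lines)

print(tiny_dot([('root', 'a'), ('a', 'b')]))
```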

HarmSpineImporter

Bases: SpineImporter

Source code in kernpy/core/harm_spine_importer.py
class HarmSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        HarmSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.HARMONY)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.HARMONY)

        return token

__init__(verbose=False)

HarmSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/harm_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    HarmSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

HeaderToken

Bases: SimpleToken

HeaderTokens class.

Source code in kernpy/core/tokens.py
class HeaderToken(SimpleToken):
    """
    HeaderTokens class.
    """

    def __init__(self, encoding, spine_id: int):
        """
        Constructor for the HeaderToken class.

        Args:
            encoding (str): The original representation of the token.
            spine_id (int): The spine id of the token. The spine id is used to identify the token in the score.\
                The spine_id starts from 0 and increases by 1 for each new spine like the following example:
                **kern  **kern  **kern **dyn **text
                0   1   2   3   4
        """
        super().__init__(encoding, TokenCategory.HEADER)
        self.spine_id = spine_id

    def export(self, **kwargs) -> str:
        return self.encoding

__init__(encoding, spine_id)

Constructor for the HeaderToken class.

Parameters:

Name Type Description Default
encoding str

The original representation of the token.

required
spine_id int

The spine id of the token. The spine id is used to identify the token in the score. The spine_id starts from 0 and increases by 1 for each new spine, as in the following example: **kern **kern **kern **dyn **text → spine ids 0 1 2 3 4

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding, spine_id: int):
    """
    Constructor for the HeaderToken class.

    Args:
        encoding (str): The original representation of the token.
        spine_id (int): The spine id of the token. The spine id is used to identify the token in the score.\
            The spine_id starts from 0 and increases by 1 for each new spine like the following example:
            **kern  **kern  **kern **dyn **text
            0   1   2   3   4
    """
    super().__init__(encoding, TokenCategory.HEADER)
    self.spine_id = spine_id

HeaderTokenGenerator

HeaderTokenGenerator class.

This class is used to translate the HeaderTokens to the specific encoding format.

Source code in kernpy/core/exporter.py
class HeaderTokenGenerator:
    """
    HeaderTokenGenerator class.

    This class is used to translate the HeaderTokens to the specific encoding format.
    """
    @classmethod
    def new(cls, *, token: HeaderToken, type: Encoding):
        """
        Create a new HeaderToken translated to the given encoding. Only accepts standardized Humdrum **kern encodings.

        Args:
            token (HeaderToken): The HeaderToken to be translated.
            type (Encoding): The encoding to be used.

        Examples:
            >>> header = HeaderToken('**kern', 0)
            >>> header.encoding
            '**kern'
            >>> new_header = HeaderTokenGenerator.new(token=header, type=Encoding.eKern)
            >>> new_header.encoding
            '**ekern'
        """
        new_encoding = f'**{type.prefix()}{token.encoding[2:]}'
        new_token = HeaderToken(new_encoding, token.spine_id)

        return new_token

new(*, token, type) classmethod

Create a new HeaderToken translated to the given encoding. Only accepts standardized Humdrum **kern encodings.

Parameters:

Name Type Description Default
token HeaderToken

The HeaderToken to be translated.

required
type Encoding

The encoding to be used.

required

Examples:

>>> header = HeaderToken('**kern', 0)
>>> header.encoding
'**kern'
>>> new_header = HeaderTokenGenerator.new(token=header, type=Encoding.eKern)
>>> new_header.encoding
'**ekern'
Source code in kernpy/core/exporter.py
@classmethod
def new(cls, *, token: HeaderToken, type: Encoding):
    """
    Create a new HeaderToken translated to the given encoding format. Only accepts standardized Humdrum **kern encodings.

    Args:
        token (HeaderToken): The HeaderToken to be translated.
        type (Encoding): The encoding to be used.

    Examples:
        >>> header = HeaderToken('**kern', 0)
        >>> header.encoding
        '**kern'
        >>> new_header = HeaderTokenGenerator.new(token=header, type=Encoding.eKern)
        >>> new_header.encoding
        '**ekern'
    """
    new_encoding = f'**{type.prefix()}{token.encoding[2:]}'
    new_token = HeaderToken(new_encoding, token.spine_id)

    return new_token
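
The translation above boils down to swapping the text after the `**` prefix. The following minimal sketch reproduces that step outside kernpy (`translate_header` is an illustrative helper, not part of the kernpy API; it assumes `Encoding.eKern.prefix()` returns `'e'`):

```python
# Illustrative sketch of the prefix swap performed by HeaderTokenGenerator.new.
# translate_header is a hypothetical helper, not part of kernpy.
def translate_header(encoding: str, prefix: str) -> str:
    """Rewrite a spine header, e.g. '**kern' -> '**ekern' for prefix 'e'."""
    if not encoding.startswith('**'):
        raise ValueError(f"not a spine header: {encoding!r}")
    return f"**{prefix}{encoding[2:]}"

print(translate_header('**kern', 'e'))  # **ekern
```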

HumdrumPitchImporter

Bases: PitchImporter

Represents the pitch in the Humdrum Kern format.

The pitch name is represented using International Organization for Standardization (ISO) octave notation. The first line below the staff is C4 in the G clef; the C above is C5, the C below is C3, and so on.

The Humdrum Kern format uses the following name representation: 'c' = C4 'cc' = C5 'ccc' = C6 'cccc' = C7

'C' = C3 'CC' = C2 'CCC' = C1

This class does not limit the pitch range.

In the following example, the name is represented by the letter 'c'. The name of 'c' is C4, 'cc' is C5, 'ccc' is C6.

**kern
*clefG2
2c          // C4
2cc         // C5
2ccc        // C6
2C          // C3
2CC         // C2
2CCC        // C1
*-
Source code in kernpy/core/pitch_models.py
class HumdrumPitchImporter(PitchImporter):
    """
    Represents the pitch in the Humdrum Kern format.

    The pitch name is represented using International Organization for Standardization (ISO) octave notation.
    The first line below the staff is C4 in the G clef; the C above is C5, the C below is C3, and so on.

    The Humdrum Kern format uses the following name representation:
    'c' = C4
    'cc' = C5
    'ccc' = C6
    'cccc' = C7

    'C' = C3
    'CC' = C2
    'CCC' = C1

    This class does not limit the pitch range.

    In the following example, the name is represented by the letter 'c'. The name of 'c' is C4, 'cc' is C5, 'ccc' is C6.
    ```
    **kern
    *clefG2
    2c          // C4
    2cc         // C5
    2ccc        // C6
    2C          // C3
    2CC         // C2
    2CCC        // C1
    *-
    ```
    """
    C4_PITCH_LOWERCASE = 'c'
    C4_OCATAVE = 4
    C3_PITCH_UPPERCASE = 'C'
    C3_OCATAVE = 3
    VALID_PITCHES = 'abcdefg' + 'ABCDEFG'

    def __init__(self):
        super().__init__()

    def import_pitch(self, encoding: str) -> AgnosticPitch:
        self.name, self.octave = self._parse_pitch(encoding)
        return AgnosticPitch(self.name, self.octave)

    def _parse_pitch(self, encoding: str) -> tuple:
        accidentals = ''.join([c for c in encoding if c in ['#', '-']])
        accidentals = accidentals.replace('#', '+')
        encoding = encoding.replace('#', '').replace('-', '')
        pitch = encoding[0].lower()
        octave = None
        if encoding[0].islower():
            min_octave = HumdrumPitchImporter.C4_OCATAVE
            octave = min_octave + (len(encoding) - 1)
        elif encoding[0].isupper():
            max_octave = HumdrumPitchImporter.C3_OCATAVE
            octave = max_octave - (len(encoding) - 1)
        name = f"{pitch}{accidentals}"
        return name, octave
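
The octave arithmetic in `_parse_pitch` can be reproduced in a few self-contained lines. The sketch below mirrors the logic shown above (the function name is illustrative, not the kernpy API): lowercase letters count up from C4, uppercase letters count down from C3, and `#`/`-` accidentals are collected separately with `#` normalized to `+`.

```python
def parse_kern_pitch(encoding: str) -> tuple:
    """Mirror of HumdrumPitchImporter._parse_pitch (illustrative helper)."""
    accidentals = ''.join(c for c in encoding if c in '#-').replace('#', '+')
    letters = encoding.replace('#', '').replace('-', '')
    if letters[0].islower():
        octave = 4 + (len(letters) - 1)   # 'c' -> C4, 'cc' -> C5, ...
    else:
        octave = 3 - (len(letters) - 1)   # 'C' -> C3, 'CC' -> C2, ...
    return f"{letters[0].lower()}{accidentals}", octave

print(parse_kern_pitch('ccc'))  # ('c', 6)
```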

Importer

Importer class.

Use this class to import the content from a file or a string to a Document object.

Source code in kernpy/core/importer.py
class Importer:
    """
    Importer class.

    Use this class to import the content from a file or a string to a `Document` object.
    """
    def __init__(self):
        """
        Create an instance of the importer.

        Raises:
            Exception: If the importer content is not a valid **kern file.

        Examples:
            # Create the importer
            >>> importer = Importer()

            # Import the content from a file
            >>> document = importer.import_file('file.krn')

            # Import the content from a string
            >>> document = importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
        """
        self.last_measure_number = None
        self.last_bounding_box = None
        self.errors = []

        self._tree = MultistageTree()
        self._document = Document(self._tree)
        self._importers = {}
        self._header_row_number = None
        self._row_number = 1
        self._tree_stage = 0
        self._next_stage_parents = None
        self._prev_stage_parents = None
        self._last_node_previous_to_header = self._tree.root

    @staticmethod
    def get_last_spine_operator(parent):
        if parent is None:
            return None
        elif isinstance(parent.token, SpineOperationToken):
            return parent
        else:
            return parent.last_spine_operator_node

    # TODO: Document how header_node and last_spine_operator_node are propagated...
    def run(self, reader) -> Document:
        for row in reader:
            if len(row) <= 0:
                # Found an empty row, usually the last one. Ignore it.
                continue

            self._tree_stage = self._tree_stage + 1
            is_barline = False
            if self._next_stage_parents:
                self._prev_stage_parents = copy(self._next_stage_parents)
            self._next_stage_parents = []

            if row[0].startswith("!!"):
                self._compute_metacomment_token(row[0].strip())
            else:
                for icolumn, column in enumerate(row):
                    if column.startswith("**"):
                        self._compute_header_token(icolumn, column)
                        # go to next row
                        continue

                    if column in SPINE_OPERATIONS:
                        self._compute_spine_operator_token(icolumn, column, row)
                    else:  # column is not a spine operation
                        if column.startswith("!"):
                            token = FieldCommentToken(column)
                        else:
                            if self._prev_stage_parents is None:
                                raise ValueError(f'No spine header found for column #{icolumn}. '
                                                 f'Expected a previous line with valid content. '
                                                 f'The token in column #{icolumn}, row #{self._row_number - 1} '
                                                 f'was not created correctly. Error detected in '
                                                 f'column #{icolumn}, row #{self._row_number}. '
                                                 f'Found {column}.')
                            if icolumn >= len(self._prev_stage_parents):
                                # TODO: Try to fix the kern in runtime. Add options to public API
                                # continue  # ignore the column
                                raise ValueError(f'Wrong number of columns in row {self._row_number}. '
                                                 f'The token in column #{icolumn}, row #{self._row_number} '
                                                 f'exceeds the expected number of columns in its row. '
                                                 f'Expected {len(self._prev_stage_parents)} columns '
                                                 f'but found {len(row)}.')
                            parent = self._prev_stage_parents[icolumn]
                            if not parent:
                                raise Exception(f'Cannot find a parent node for column #{icolumn} in row {self._row_number}')
                            if not parent.header_node:
                                raise Exception(f'Cannot find a header node for column #{icolumn} in row {self._row_number}')
                            importer = self._importers.get(parent.header_node.token.encoding)
                            if not importer:
                                raise Exception(f'Cannot find an importer for header {parent.header_node.token.encoding}')
                            try:
                                token = importer.import_token(column)
                            except Exception as error:
                                token = ErrorToken(column, self._row_number, str(error))
                                self.errors.append(token)
                        if not token:
                            raise Exception(
                                f'No token generated for input {column} in row number #{self._row_number} using importer {importer}')

                        parent = self._prev_stage_parents[icolumn]
                        node = self._tree.add_node(self._tree_stage, parent, token, self.get_last_spine_operator(parent), parent.last_signature_nodes, parent.header_node)
                        self._next_stage_parents.append(node)

                        if (token.category == TokenCategory.BARLINES
                                or TokenCategory.is_child(child=token.category, parent=TokenCategory.CORE)
                                    and len(self._document.measure_start_tree_stages) == 0):
                            is_barline = True
                        elif isinstance(token, BoundingBoxToken):
                            self.handle_bounding_box(self._document, token)
                        elif isinstance(token, SignatureToken):
                            node.last_signature_nodes.update(node)

                if is_barline:
                    self._document.measure_start_tree_stages.append(self._tree_stage)
                    self.last_measure_number = len(self._document.measure_start_tree_stages)
                    if self.last_bounding_box:
                        self.last_bounding_box.to_measure = self.last_measure_number
            self._row_number = self._row_number + 1
        return self._document

    def handle_bounding_box(self, document: Document, token: BoundingBoxToken):
        page_number = token.page_number
        last_page_bb = document.page_bounding_boxes.get(page_number)
        if last_page_bb is None:
            if self.last_measure_number is None:
                self.last_measure_number = 0
            self.last_bounding_box = BoundingBoxMeasures(token.bounding_box, self.last_measure_number,
                                                         self.last_measure_number)
            document.page_bounding_boxes[page_number] = self.last_bounding_box
        else:
            last_page_bb.bounding_box.extend(token.bounding_box)
            last_page_bb.to_measure = self.last_measure_number

    def import_file(self, file_path: Path) -> Document:
        """
        Import the content of a Humdrum **kern file into a `Document`.

        Args:
            file_path: The path to the file.

        Returns:
            Document - The document with the imported content.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_file('file.krn')
        """
        with open(file_path, 'r', newline='', encoding='utf-8', errors='ignore') as file:
            reader = csv.reader(file, delimiter='\t')
            return self.run(reader)

    def import_string(self, text: str) -> Document:
        """
        Import the score content from a string.

        Args:
            text: The content of the score in string format.

        Returns:
            Document - The document with the imported content.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
            # Read the content from a file
            >>> with open('file.krn',  'r', newline='', encoding='utf-8', errors='ignore') as f: # We encourage you to use these open file options
            >>>     content = f.read()
            >>> document = importer.import_string(content)
        """
        lines = text.splitlines()
        reader = csv.reader(lines, delimiter='\t')
        return self.run(reader)

    def get_error_messages(self) -> str:
        """
        Get the error messages of the importer.

        Returns: str - The error messages separated by newline characters.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_file(Path('file.krn'))
            >>> print(importer.get_error_messages())
            'Error: Invalid token in row 1'
        """
        result = ''
        for err in self.errors:
            result += str(err)
            result += '\n'
        return result

    def has_errors(self) -> bool:
        """
        Check if the importer has any errors.

        Returns: bool - True if the importer has errors, False otherwise.

        Examples:
            # Create the importer and read the file
            >>> importer = Importer()
            >>> importer.import_file(Path('file.krn'))    # file.krn has an error
            >>> print(importer.has_errors())
            True
            >>> importer.import_file(Path('file2.krn'))   # file2.krn has no errors
            >>> print(importer.has_errors())
            False
        """
        return len(self.errors) > 0

    def _compute_metacomment_token(self, raw_token: str):
        token = MetacommentToken(raw_token)
        if self._header_row_number is None:
            node = self._tree.add_node(self._tree_stage, self._last_node_previous_to_header, token, None, None, None)
            self._last_node_previous_to_header = node
        else:
            for parent in self._prev_stage_parents:
                node = self._tree.add_node(self._tree_stage, parent, token, self.get_last_spine_operator(parent), parent.last_signature_nodes, parent.header_node)  # the same reference for all spines - TODO: remember to document this
                self._next_stage_parents.append(node)

    def _compute_header_token(self, column_index: int, column_content: str):
        if self._header_row_number is not None and self._header_row_number != self._row_number:
            raise Exception(
                f"Several header rows not supported, there is a header row in #{self._header_row_number} and another in #{self._row_number} ")

        # it's a spine header
        self._document.header_stage = self._tree_stage
        importer = self._importers.get(column_content)
        if not importer:
            importer = createImporter(column_content)
            self._importers[column_content] = importer

        token = HeaderToken(column_content, spine_id=column_index)
        node = self._tree.add_node(self._tree_stage, self._last_node_previous_to_header, token, None, None)
        node.header_node = node # this value will be propagated
        self._next_stage_parents.append(node)

    def _compute_spine_operator_token(self, column_index: int, column_content: str, row: List[str]):
        token = SpineOperationToken(column_content)

        if column_index >= len(self._prev_stage_parents):
            raise Exception(f'Expected at least {column_index+1} parents in row {self._row_number}, but found {len(self._prev_stage_parents)}: {row}')

        parent = self._prev_stage_parents[column_index]
        node = self._tree.add_node(self._tree_stage, parent, token, self.get_last_spine_operator(parent), parent.last_signature_nodes, parent.header_node)

        if column_content == '*-':
            if node.last_spine_operator_node is not None:
                node.last_spine_operator_node.token.cancelled_at_stage = self._tree_stage
            pass # it's terminated, no continuation
        elif column_content == "*+" or column_content == "*^":
            self._next_stage_parents.append(node)
            self._next_stage_parents.append(node) # twice, the next stage two children will have this one as parent
        elif column_content == "*v":
            if node.last_spine_operator_node is not None:
                node.last_spine_operator_node.token.cancelled_at_stage = self._tree_stage

            if column_index == 0 or row[column_index-1] != '*v' or self._prev_stage_parents[column_index-1].header_node != self._prev_stage_parents[column_index].header_node: # don't collapse two different spines
                self._next_stage_parents.append(node) # just one spine each two
        else:
            raise Exception(f'Unknown spine operation {column_content!r} in column #{column_index} and row #{self._row_number}')
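
The parent bookkeeping for spine operations can be hard to follow from the code alone. The toy model below is illustrative only (the real implementation tracks tree nodes rather than plain values, and also compares `header_node`s before collapsing a `*v` pair): it shows how `*^` duplicates a parent slot, `*v` keeps one slot per merged run, and `*-` drops the slot.

```python
def next_stage_slots(slots, operations):
    """Toy model of Importer's next-stage parent list (illustrative)."""
    out = []
    for i, (slot, op) in enumerate(zip(slots, operations)):
        if op == '*-':            # spine terminated: no continuation
            continue
        elif op in ('*+', '*^'):  # split: the next two columns share this parent
            out.extend([slot, slot])
        elif op == '*v':          # join: keep only the first of a run of '*v'
            if i == 0 or operations[i - 1] != '*v':
                out.append(slot)
        else:                     # ordinary token: spine continues
            out.append(slot)
    return out

print(next_stage_slots(['k', 'd'], ['*^', '*']))             # ['k', 'k', 'd']
print(next_stage_slots(['k', 'k', 'd'], ['*v', '*v', '*']))  # ['k', 'd']
```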

__init__()

    Create an instance of the importer.

    Raises:
        Exception: If the importer content is not a valid **kern file.

    Examples:
        # Create the importer
        >>> importer = Importer()

        # Import the content from a file
        >>> document = importer.import_file('file.krn')

        # Import the content from a string
        >>> document = importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")

Source code in kernpy/core/importer.py
def __init__(self):
    """
    Create an instance of the importer.

    Raises:
        Exception: If the importer content is not a valid **kern file.

    Examples:
        # Create the importer
        >>> importer = Importer()

        # Import the content from a file
        >>> document = importer.import_file('file.krn')

        # Import the content from a string
        >>> document = importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
    """
    self.last_measure_number = None
    self.last_bounding_box = None
    self.errors = []

    self._tree = MultistageTree()
    self._document = Document(self._tree)
    self._importers = {}
    self._header_row_number = None
    self._row_number = 1
    self._tree_stage = 0
    self._next_stage_parents = None
    self._prev_stage_parents = None
    self._last_node_previous_to_header = self._tree.root

get_error_messages()

Get the error messages of the importer.

Returns: str - The error messages separated by newline characters.

Examples:

Create the importer and read the file

>>> importer = Importer()
>>> importer.import_file(Path('file.krn'))
>>> print(importer.get_error_messages())
'Error: Invalid token in row 1'
Source code in kernpy/core/importer.py
def get_error_messages(self) -> str:
    """
    Get the error messages of the importer.

    Returns: str - The error messages separated by newline characters.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_file(Path('file.krn'))
        >>> print(importer.get_error_messages())
        'Error: Invalid token in row 1'
    """
    result = ''
    for err in self.errors:
        result += str(err)
        result += '\n'
    return result

has_errors()

Check if the importer has any errors.

Returns: bool - True if the importer has errors, False otherwise.

Examples:

Create the importer and read the file

>>> importer = Importer()
>>> importer.import_file(Path('file.krn'))    # file.krn has an error
>>> print(importer.has_errors())
True
>>> importer.import_file(Path('file2.krn'))   # file2.krn has no errors
>>> print(importer.has_errors())
False
Source code in kernpy/core/importer.py
def has_errors(self) -> bool:
    """
    Check if the importer has any errors.

    Returns: bool - True if the importer has errors, False otherwise.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_file(Path('file.krn'))    # file.krn has an error
        >>> print(importer.has_errors())
        True
        >>> importer.import_file(Path('file2.krn'))   # file2.krn has no errors
        >>> print(importer.has_errors())
        False
    """
    return len(self.errors) > 0

import_file(file_path)

Import the content of a Humdrum **kern file into a Document.

Parameters:

Name Type Description Default
file_path Path

The path to the file.

required

Returns:

Type Description
Document

Document - The document with the imported content.

Examples:

Create the importer and read the file

>>> importer = Importer()
>>> importer.import_file('file.krn')
Source code in kernpy/core/importer.py
def import_file(self, file_path: Path) -> Document:
    """
    Import the content of a Humdrum **kern file into a `Document`.

    Args:
        file_path: The path to the file.

    Returns:
        Document - The document with the imported content.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_file('file.krn')
    """
    with open(file_path, 'r', newline='', encoding='utf-8', errors='ignore') as file:
        reader = csv.reader(file, delimiter='\t')
        return self.run(reader)

import_string(text)

    Import the score content from a string.

    Args:
        text: The content of the score in string format.

    Returns:
        Document - The document with the imported content.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
        # Read the content from a file
        >>> with open('file.krn', 'r', newline='', encoding='utf-8', errors='ignore') as f:  # We encourage you to use these open file options
        >>>     content = f.read()
        >>> document = importer.import_string(content)

Source code in kernpy/core/importer.py
def import_string(self, text: str) -> Document:
    """
    Import the score content from a string.

    Args:
        text: The content of the score in string format.

    Returns:
        Document - The document with the imported content.

    Examples:
        # Create the importer and read the file
        >>> importer = Importer()
        >>> importer.import_string("**kern\n*clefF4\nc4\n4d\n4e\n4f\n*-")
        # Read the content from a file
        >>> with open('file.krn',  'r', newline='', encoding='utf-8', errors='ignore') as f: # We encourage you to use these open file options
        >>>     content = f.read()
        >>> document = importer.import_string(content)
    """
    lines = text.splitlines()
    reader = csv.reader(lines, delimiter='\t')
    return self.run(reader)
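
Both `import_file` and `import_string` feed `run` with a `csv.reader` over tab-separated lines, so each Humdrum row arrives as a list of spine columns. A standalone sketch of that input stage (the score content below is a made-up sample):

```python
import csv

# A tiny two-spine score as a tab-separated string.
kern = "**kern\t**dyn\n*clefF4\t*\n4c\tp\n*-\t*-"
rows = list(csv.reader(kern.splitlines(), delimiter='\t'))
for row in rows:
    print(row)
# ['**kern', '**dyn']
# ['*clefF4', '*']
# ['4c', 'p']
# ['*-', '*-']
```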

InstrumentToken

Bases: SimpleToken

InstrumentToken class stores the instruments of the score.

These tokens usually look like *I"Organo.

Source code in kernpy/core/tokens.py
class InstrumentToken(SimpleToken):
    """
    InstrumentToken class stores the instruments of the score.

    These tokens usually look like `*I"Organo`.
    """

    def __init__(self, encoding: str):
        """
        Constructor for the InstrumentToken

        Args:
            encoding:
        """
        super().__init__(encoding, TokenCategory.INSTRUMENTS)

__init__(encoding)

Constructor for the InstrumentToken

Parameters:

Name Type Description Default
encoding str
required
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str):
    """
    Constructor for the InstrumentToken

    Args:
        encoding:
    """
    super().__init__(encoding, TokenCategory.INSTRUMENTS)

KernSpineImporter

Bases: SpineImporter

Source code in kernpy/core/kern_spine_importer.py
class KernSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        KernSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str):
        self._raise_error_if_wrong_input(encoding)

        # self.listenerImporter = KernListenerImporter(token)  # TODO: Why doesn't this work?
        # self.listenerImporter.start()
        lexer = kernSpineLexer(InputStream(encoding))
        lexer.removeErrorListeners()
        lexer.addErrorListener(self.error_listener)
        stream = CommonTokenStream(lexer)
        parser = kernSpineParser(stream)
        parser._interp.predictionMode = PredictionMode.SLL  # SLL prediction significantly speeds up parsing
        parser.removeErrorListeners()
        parser.addErrorListener(self.error_listener)
        parser.errHandler = BailErrorStrategy()
        tree = parser.start()
        walker = ParseTreeWalker()
        listener = KernSpineListener()
        walker.walk(listener, tree)
        if self.error_listener.getNumberErrorsFound() > 0:
            raise Exception(self.error_listener.errors)
        return listener.token

__init__(verbose=False)

KernSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/kern_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    KernSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

KernTokenizer

Bases: Tokenizer

KernTokenizer converts a Token into a normalized kern string representation.

Source code in kernpy/core/tokenizers.py
class KernTokenizer(Tokenizer):
    """
    KernTokenizer converts a Token into a normalized kern string representation.
    """
    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new KernTokenizer.

        Args:
            token_categories (Set[TokenCategory]): List of categories to be tokenized. If None will raise an exception.
        """
        super().__init__(token_categories=token_categories)

    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a normalized kern string representation.
        This format is the classic Humdrum **kern representation.

        Args:
            token (Token): Token to be tokenized.

        Returns (str): Normalized kern string representation. This is the classic Humdrum **kern representation.

        Examples:
            >>> token.encoding
            '2@.@bb@-·_·L'
            >>> KernTokenizer().tokenize(token)
            '2.bb-_L'
        """
        return EkernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '').replace(DECORATION_SEPARATOR, '')

__init__(*, token_categories)

Create a new KernTokenizer.

Parameters:

Name Type Description Default
token_categories Set[TokenCategory]

Set of categories to be tokenized. Raises an exception if None.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new KernTokenizer.

    Args:
        token_categories (Set[TokenCategory]): Set of categories to be tokenized. Raises an exception if None.
    """
    super().__init__(token_categories=token_categories)

tokenize(token)

Tokenize a token into a normalized kern string representation. This format is the classic Humdrum **kern representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): Normalized kern string representation. This is the classic Humdrum **kern representation.

Examples:

>>> token.encoding
'2@.@bb@-·_·L'
>>> KernTokenizer().tokenize(token)
'2.bb-_L'
Source code in kernpy/core/tokenizers.py
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a normalized kern string representation.
    This format is the classic Humdrum **kern representation.

    Args:
        token (Token): Token to be tokenized.

    Returns (str): Normalized kern string representation. This is the classic Humdrum **kern representation.

    Examples:
        >>> token.encoding
        '2@.@bb@-·_·L'
        >>> KernTokenizer().tokenize(token)
        '2.bb-_L'
    """
    return EkernTokenizer(token_categories=self.token_categories).tokenize(token).replace(TOKEN_SEPARATOR, '').replace(DECORATION_SEPARATOR, '')
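
The ekern-to-kern normalization is a pure string cleanup: drop the token and decoration separators. A self-contained sketch (the separator values `'@'` and `'·'` are assumptions inferred from the example above; the real constants are kernpy's `TOKEN_SEPARATOR` and `DECORATION_SEPARATOR`):

```python
TOKEN_SEPARATOR = '@'       # assumed value, inferred from the example above
DECORATION_SEPARATOR = '·'  # assumed value, inferred from the example above

def to_plain_kern(ekern: str) -> str:
    """Strip ekern separators to recover classic **kern (illustrative helper)."""
    return ekern.replace(TOKEN_SEPARATOR, '').replace(DECORATION_SEPARATOR, '')

print(to_plain_kern('2@.@bb@-·_·L'))  # 2.bb-_L
```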

KeySignatureToken

Bases: SignatureToken

KeySignatureToken class.

Source code in kernpy/core/tokens.py
class KeySignatureToken(SignatureToken):
    """
    KeySignatureToken class.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.KEY_SIGNATURE)

KeyToken

Bases: SignatureToken

KeyToken class.

Source code in kernpy/core/tokens.py
class KeyToken(SignatureToken):
    """
    KeyToken class.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.KEY_TOKEN)

MHXMToken

Bases: Token

MHXMToken class.

Source code in kernpy/core/tokens.py
class MHXMToken(Token):
    """
    MHXMToken class.
    """
    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.MHXM)

    # TODO: Implement constructor
    def export(self, **kwargs) -> str:
        return self.encoding

MensSpineImporter

Bases: SpineImporter

Source code in kernpy/core/mens_spine_importer.py
class MensSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        MensSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        raise NotImplementedError()

    def import_token(self, encoding: str) -> Token:
        raise NotImplementedError()

__init__(verbose=False)

MensSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/mens_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    MensSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

MetacommentToken

Bases: SimpleToken

MetacommentToken class stores the metacomments of the score. Usually these are comments starting with !!.

Source code in kernpy/core/tokens.py
class MetacommentToken(SimpleToken):
    """
    MetacommentToken class stores the metacomments of the score.
    Usually these are comments starting with `!!`.

    """

    def __init__(self, encoding: str):
        """
        Constructor for the MetacommentToken class.

        Args:
            encoding (str): The original representation of the token.
        """
        super().__init__(encoding, TokenCategory.LINE_COMMENTS)

__init__(encoding)

Constructor for the MetacommentToken class.

Parameters:

Name Type Description Default
encoding str

The original representation of the token.

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str):
    """
    Constructor for the MetacommentToken class.

    Args:
        encoding (str): The original representation of the token.
    """
    super().__init__(encoding, TokenCategory.LINE_COMMENTS)

MeterSymbolToken

Bases: SignatureToken

MeterSymbolToken class.

Source code in kernpy/core/tokens.py
class MeterSymbolToken(SignatureToken):
    """
    MeterSymbolToken class.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.METER_SYMBOL)

MultistageTree

MultistageTree class.

Source code in kernpy/core/document.py
class MultistageTree:
    """
    MultistageTree class.
    """

    def __init__(self):
        """
        Constructor for MultistageTree class.

        Create an empty Node object to serve as the root, \
        and start the stages list by placing this root node inside a new list.

        """
        self.root = Node(0, None, None, None, None, None)
        self.stages = []  # First stage (0-index) is the root (Node with None token and header_node). The core header is in stage 1.
        self.stages.append([self.root])

    def add_node(
            self,
            stage: int,
            parent: Node,
            token: Optional[AbstractToken],
            last_spine_operator_node: Optional[Node],
            previous_signature_nodes: Optional[SignatureNodes],
            header_node: Optional[Node] = None
    ) -> Node:
        """
        Add a new node to the tree.
        Args:
            stage (int):
            parent (Node):
            token (Optional[AbstractToken]):
            last_spine_operator_node (Optional[Node]):
            previous_signature_nodes (Optional[SignatureNodes]):
            header_node (Optional[Node]):

        Returns: Node - The added node object.

        """
        node = Node(stage, token, parent, last_spine_operator_node, previous_signature_nodes, header_node)
        if stage == len(self.stages):
            self.stages.append([node])
        elif stage > len(self.stages):
            raise ValueError(f'Cannot add node in stage {stage} when there are only {len(self.stages)} stages')
        else:
            self.stages[stage].append(node)

        parent.children.append(node)
        return node

    def dfs(self, visit_method) -> None:
        """
        Depth-first search (DFS)

        Args:
            visit_method (TreeTraversalInterface): The tree traversal interface.

        Returns: None

        """
        self.root.dfs(visit_method)

    def dfs_iterative(self, visit_method) -> None:
        """
        Depth-first search (DFS). Iterative version.

        Args:
            visit_method (TreeTraversalInterface): The tree traversal interface.

        Returns: None

        """
        self.root.dfs_iterative(visit_method)

    def __deepcopy__(self, memo):
        """
        Create a deep copy of the MultistageTree object.
        """
        # Create a new empty MultistageTree object
        new_tree = MultistageTree()

        # Deepcopy the root
        new_tree.root = deepcopy(self.root, memo)

        # Deepcopy the stages list
        new_tree.stages = deepcopy(self.stages, memo)

        return new_tree

__deepcopy__(memo)

Create a deep copy of the MultistageTree object.

Source code in kernpy/core/document.py
def __deepcopy__(self, memo):
    """
    Create a deep copy of the MultistageTree object.
    """
    # Create a new empty MultistageTree object
    new_tree = MultistageTree()

    # Deepcopy the root
    new_tree.root = deepcopy(self.root, memo)

    # Deepcopy the stages list
    new_tree.stages = deepcopy(self.stages, memo)

    return new_tree

__init__()

Constructor for MultistageTree class.

Create an empty Node object to serve as the root, and start the stages list by placing this root node inside a new list.

Source code in kernpy/core/document.py
def __init__(self):
    """
    Constructor for MultistageTree class.

    Create an empty Node object to serve as the root, \
    and start the stages list by placing this root node inside a new list.

    """
    self.root = Node(0, None, None, None, None, None)
    self.stages = []  # First stage (0-index) is the root (Node with None token and header_node). The core header is in stage 1.
    self.stages.append([self.root])

add_node(stage, parent, token, last_spine_operator_node, previous_signature_nodes, header_node=None)

Add a new node to the tree.

Parameters:

Name Type Description Default
stage int

The stage of the node in the tree.

required
parent Node

The parent node.

required
token Optional[AbstractToken]

The token stored in the new node.

required
last_spine_operator_node Optional[Node]

The last spine operator node.

required
previous_signature_nodes Optional[SignatureNodes]

A reference to the previous SignatureNodes instance.

required
header_node Optional[Node]

The header node.

None

Returns: Node - The added node object.

Source code in kernpy/core/document.py
def add_node(
        self,
        stage: int,
        parent: Node,
        token: Optional[AbstractToken],
        last_spine_operator_node: Optional[Node],
        previous_signature_nodes: Optional[SignatureNodes],
        header_node: Optional[Node] = None
) -> Node:
    """
    Add a new node to the tree.
    Args:
        stage (int):
        parent (Node):
        token (Optional[AbstractToken]):
        last_spine_operator_node (Optional[Node]):
        previous_signature_nodes (Optional[SignatureNodes]):
        header_node (Optional[Node]):

    Returns: Node - The added node object.

    """
    node = Node(stage, token, parent, last_spine_operator_node, previous_signature_nodes, header_node)
    if stage == len(self.stages):
        self.stages.append([node])
    elif stage > len(self.stages):
        raise ValueError(f'Cannot add node in stage {stage} when there are only {len(self.stages)} stages')
    else:
        self.stages[stage].append(node)

    parent.children.append(node)
    return node
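
The stage bookkeeping above can be sketched with a stripped-down tree. The `Node` here is a hypothetical minimal stand-in (the real constructor takes the token, operator, and signature arguments shown above):

```python
# Minimal sketch of the stage bookkeeping in MultistageTree.add_node.
class Node:
    def __init__(self, stage, token=None, parent=None):
        self.stage = stage
        self.token = token
        self.parent = parent
        self.children = []

class Tree:
    def __init__(self):
        self.root = Node(0)
        self.stages = [[self.root]]  # stage 0 holds only the root

    def add_node(self, stage, parent, token=None):
        node = Node(stage, token, parent)
        if stage == len(self.stages):    # first node of a brand-new stage
            self.stages.append([node])
        elif stage > len(self.stages):   # stages cannot be skipped
            raise ValueError(
                f'Cannot add node in stage {stage} '
                f'when there are only {len(self.stages)} stages')
        else:                            # existing stage: append to it
            self.stages[stage].append(node)
        parent.children.append(node)
        return node

tree = Tree()
a = tree.add_node(1, tree.root)
b = tree.add_node(1, tree.root)
c = tree.add_node(2, a)
print([len(s) for s in tree.stages])  # [1, 2, 1]
```

Note that a stage can only be created as the immediate successor of the last existing stage, which matches the `ValueError` raised in the method above.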

dfs(visit_method)

Depth-first search (DFS)

Parameters:

Name Type Description Default
visit_method TreeTraversalInterface

The tree traversal interface.

required

Returns: None

Source code in kernpy/core/document.py
def dfs(self, visit_method) -> None:
    """
    Depth-first search (DFS)

    Args:
        visit_method (TreeTraversalInterface): The tree traversal interface.

    Returns: None

    """
    self.root.dfs(visit_method)

dfs_iterative(visit_method)

Depth-first search (DFS). Iterative version.

Parameters:

Name Type Description Default
visit_method TreeTraversalInterface

The tree traversal interface.

required

Returns: None

Source code in kernpy/core/document.py
def dfs_iterative(self, visit_method) -> None:
    """
    Depth-first search (DFS). Iterative version.

    Args:
        visit_method (TreeTraversalInterface): The tree traversal interface.

    Returns: None

    """
    self.root.dfs_iterative(visit_method)

MxhmSpineImporter

Bases: SpineImporter

Source code in kernpy/core/mhxm_spine_importer.py
class MxhmSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        MxhmSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.HARMONY)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.BARLINES,
            TokenCategory.COMMENTS,
        }

        if any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.HARMONY)

        return token
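
The try-then-fallback pattern of import_token can be sketched with simplified stand-ins. The categories and the strict_parse helper below are hypothetical, not kernpy's own API; the real importer delegates to KernSpineImporter and TokenCategory.is_child:

```python
from enum import Enum

# Simplified stand-ins for kernpy's token classes and categories.
class TokenCategory(Enum):
    HARMONY = 'harmony'
    BARLINES = 'barlines'
    NOTE_REST = 'note_rest'

class SimpleToken:
    def __init__(self, encoding, category):
        self.encoding = encoding
        self.category = category

def strict_parse(encoding):
    """Stand-in for the stricter **kern parser: barlines and notes only."""
    if encoding.startswith('='):
        return SimpleToken(encoding, TokenCategory.BARLINES)
    if encoding[:1].isdigit():
        return SimpleToken(encoding, TokenCategory.NOTE_REST)
    raise ValueError(f'cannot parse {encoding!r}')

ACCEPTED_CATEGORIES = {TokenCategory.BARLINES}

def import_token(encoding):
    # Try the strict parser first; any failure falls back to a generic token.
    try:
        token = strict_parse(encoding)
    except Exception:
        return SimpleToken(encoding, TokenCategory.HARMONY)
    # Tokens in the shared category set are re-wrapped, mirroring the code above.
    if token.category in ACCEPTED_CATEGORIES:
        return SimpleToken(encoding, TokenCategory.HARMONY)
    return token

print(import_token('4c').category)  # TokenCategory.NOTE_REST
```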

__init__(verbose=False)

MxhmSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/mhxm_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    MxhmSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

Node

Node class.

This class represents a node in a tree. The Node class is responsible for storing the main information of the **kern file.

Attributes:

Name Type Description
id int

The unique id of the node.

token Optional[AbstractToken]

The specific token of the node. The token can be a KeyToken, MeterSymbolToken, etc.

parent Optional['Node']

A reference to the parent Node. If the parent is the root, the parent is None.

children List['Node']

A list of the children Node.

stage int

The stage of the node in the tree. The stage is similar to a row in the **kern file.

last_spine_operator_node Optional['Node']

The last spine operator node.

last_signature_nodes Optional[SignatureNodes]

A reference to the last SignatureNodes instance.

header_node Optional['Node']

The header node.

Source code in kernpy/core/document.py
class Node:
    """
    Node class.

    This class represents a node in a tree.
    The `Node` class is responsible for storing the main information of the **kern file.

    Attributes:
        id(int): The unique id of the node.
        token(Optional[AbstractToken]): The specific token of the node. The token can be a `KeyToken`, `MeterSymbolToken`, etc...
        parent(Optional['Node']): A reference to the parent `Node`. If the parent is the root, the parent is None.
        children(List['Node']): A list of the children `Node`.
        stage(int): The stage of the node in the tree. The stage is similar to a row in the **kern file.
        last_spine_operator_node(Optional['Node']): The last spine operator node.
        last_signature_nodes(Optional[SignatureNodes]): A reference to the last `SignatureNodes` instance.
        header_node(Optional['Node']): The header node.
    """
    NextID = 1  # static counter

    def __init__(self,
                 stage: int,
                 token: Optional[AbstractToken],
                 parent: Optional['Node'],
                 last_spine_operator_node: Optional['Node'],
                 last_signature_nodes: Optional[SignatureNodes],
                 header_node: Optional['Node']
                 ):
        """
        Create an instance of Node.

        Args:
            stage (int): The stage of the node in the tree. The stage is similar to a row in the **kern file.
            token (Optional[AbstractToken]): The specific token of the node. The token can be a `KeyToken`, `MeterSymbolToken`, etc...
            parent (Optional['Node']): A reference to the parent `Node`. If the parent is the root, the parent is None.
            last_spine_operator_node (Optional['Node']): The last spine operator node.
            last_signature_nodes (Optional[SignatureNodes]): A reference to the last `SignatureNodes` instance.
            header_node (Optional['Node']): The header node.
        """
        self.id = Node.NextID
        Node.NextID += 1
        self.token = token
        self.parent = parent
        self.children = []
        self.stage = stage
        self.header_node = header_node
        if last_signature_nodes is not None:
            self.last_signature_nodes = last_signature_nodes.clone()  # TODO: Document all this - composition
            # self.last_signature_nodes = copy.deepcopy(last_signature_nodes)  # TODO: See SignatureNodes.clone
        else:
            self.last_signature_nodes = SignatureNodes()
        self.last_spine_operator_node = last_spine_operator_node

    def count_nodes_by_stage(self) -> List[int]:
        """
        Count the number of nodes in each stage of the tree.

        Examples:
            >>> node = Node(0, None, None, None, None, None)
            >>> ...
            >>> node.count_nodes_by_stage()
            [2, 2, 2, 2, 3, 3, 3, 2]

        Returns:
            List[int]: A list with the number of nodes in each stage of the tree.
        """
        level_counts = defaultdict(int)
        queue = deque([(self, 0)])  # (node, level)
        # breadth-first search (BFS)
        while queue:
            node, level = queue.popleft()
            level_counts[level] += 1
            for child in node.children:
                queue.append((child, level + 1))

        # Convert the level_counts dictionary to a list of counts
        max_level = max(level_counts.keys())
        counts_by_level = [level_counts[level] for level in range(max_level + 1)]

        return counts_by_level

    def dfs(self, tree_traversal: TreeTraversalInterface):
        """
        Depth-first search (DFS)

        Args:
            tree_traversal (TreeTraversalInterface): The tree traversal interface. Object used to visit the nodes of the tree.
        """
        node = self
        tree_traversal.visit(node)
        for child in self.children:
            child.dfs(tree_traversal)

    def dfs_iterative(self, tree_traversal: TreeTraversalInterface):
        """
        Depth-first search (DFS). Iterative version.

        Args:
            tree_traversal (TreeTraversalInterface): The tree traversal interface. Object used to visit the nodes of the tree.

        Returns: None
        """
        stack = [self]
        while stack:
            node = stack.pop()
            tree_traversal.visit(node)
            stack.extend(reversed(node.children))  # Add children in reverse order to maintain DFS order

    def __eq__(self, other):
        """
        Compare two nodes.

        Args:
            other: The other node to compare.

        Returns: True if the nodes are equal, False otherwise.
        """
        if other is None or not isinstance(other, Node):
            return False

        return self.id == other.id

    def __ne__(self, other):
        """
        Compare two nodes.

        Args:
            other: The other node to compare.

        Returns: True if the nodes are not equal, False otherwise.
        """
        return not self.__eq__(other)

    def __hash__(self):
        """
        Get the hash of the node.

        Returns: The hash of the node.
        """
        return hash(self.id)

    def __str__(self):
        """
        Get the string representation of the node.

        Returns: The string representation of the node.
        """
        return f"{{{self.stage}: {self.token}}}"

__eq__(other)

Compare two nodes.

Parameters:

Name Type Description Default
other

The other node to compare.

required

Returns: True if the nodes are equal, False otherwise.

Source code in kernpy/core/document.py
def __eq__(self, other):
    """
    Compare two nodes.

    Args:
        other: The other node to compare.

    Returns: True if the nodes are equal, False otherwise.
    """
    if other is None or not isinstance(other, Node):
        return False

    return self.id == other.id

__hash__()

Get the hash of the node.

Returns: The hash of the node.

Source code in kernpy/core/document.py
def __hash__(self):
    """
    Get the hash of the node.

    Returns: The hash of the node.
    """
    return hash(self.id)

__init__(stage, token, parent, last_spine_operator_node, last_signature_nodes, header_node)

Create an instance of Node.

Parameters:

Name Type Description Default
stage int

The stage of the node in the tree. The stage is similar to a row in the **kern file.

required
token Optional[AbstractToken]

The specific token of the node. The token can be a KeyToken, MeterSymbolToken, etc...

required
parent Optional['Node']

A reference to the parent Node. If the parent is the root, the parent is None.

required
last_spine_operator_node Optional['Node']

The last spine operator node.

required
last_signature_nodes Optional[SignatureNodes]

A reference to the last SignatureNodes instance.

required
header_node Optional['Node']

The header node.

required
Source code in kernpy/core/document.py
def __init__(self,
             stage: int,
             token: Optional[AbstractToken],
             parent: Optional['Node'],
             last_spine_operator_node: Optional['Node'],
             last_signature_nodes: Optional[SignatureNodes],
             header_node: Optional['Node']
             ):
    """
    Create an instance of Node.

    Args:
        stage (int): The stage of the node in the tree. The stage is similar to a row in the **kern file.
        token (Optional[AbstractToken]): The specific token of the node. The token can be a `KeyToken`, `MeterSymbolToken`, etc...
        parent (Optional['Node']): A reference to the parent `Node`. If the parent is the root, the parent is None.
        last_spine_operator_node (Optional['Node']): The last spine operator node.
        last_signature_nodes (Optional[SignatureNodes]): A reference to the last `SignatureNodes` instance.
        header_node (Optional['Node']): The header node.
    """
    self.id = Node.NextID
    Node.NextID += 1
    self.token = token
    self.parent = parent
    self.children = []
    self.stage = stage
    self.header_node = header_node
    if last_signature_nodes is not None:
        self.last_signature_nodes = last_signature_nodes.clone()  # TODO: Document all this - composition
        # self.last_signature_nodes = copy.deepcopy(last_signature_nodes)  # TODO: See SignatureNodes.clone
    else:
        self.last_signature_nodes = SignatureNodes()
    self.last_spine_operator_node = last_spine_operator_node

__ne__(other)

Compare two nodes.

Parameters:

Name Type Description Default
other

The other node to compare.

required

Returns: True if the nodes are not equal, False otherwise.

Source code in kernpy/core/document.py
def __ne__(self, other):
    """
    Compare two nodes.

    Args:
        other: The other node to compare.

    Returns: True if the nodes are not equal, False otherwise.
    """
    return not self.__eq__(other)

__str__()

Get the string representation of the node.

Returns: The string representation of the node.

Source code in kernpy/core/document.py
def __str__(self):
    """
    Get the string representation of the node.

    Returns: The string representation of the node.
    """
    return f"{{{self.stage}: {self.token}}}"

count_nodes_by_stage()

Count the number of nodes in each stage of the tree.

Examples:

>>> node = Node(0, None, None, None, None, None)
>>> ...
>>> node.count_nodes_by_stage()
[2, 2, 2, 2, 3, 3, 3, 2]

Returns:

Type Description
List[int]

List[int]: A list with the number of nodes in each stage of the tree.

Source code in kernpy/core/document.py
def count_nodes_by_stage(self) -> List[int]:
    """
    Count the number of nodes in each stage of the tree.

    Examples:
        >>> node = Node(0, None, None, None, None, None)
        >>> ...
        >>> node.count_nodes_by_stage()
        [2, 2, 2, 2, 3, 3, 3, 2]

    Returns:
        List[int]: A list with the number of nodes in each stage of the tree.
    """
    level_counts = defaultdict(int)
    queue = deque([(self, 0)])  # (node, level)
    # breadth-first search (BFS)
    while queue:
        node, level = queue.popleft()
        level_counts[level] += 1
        for child in node.children:
            queue.append((child, level + 1))

    # Convert the level_counts dictionary to a list of counts
    max_level = max(level_counts.keys())
    counts_by_level = [level_counts[level] for level in range(max_level + 1)]

    return counts_by_level
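
The BFS level count can be reproduced on plain dict nodes; this sketch assumes only a children list per node, rather than kernpy's Node objects:

```python
from collections import defaultdict, deque

# Standalone sketch of the BFS in count_nodes_by_stage, run on plain
# dict nodes (each node only needs a 'children' list).
def count_nodes_by_level(root):
    level_counts = defaultdict(int)
    queue = deque([(root, 0)])  # (node, level) pairs
    while queue:
        node, level = queue.popleft()
        level_counts[level] += 1
        for child in node['children']:
            queue.append((child, level + 1))
    max_level = max(level_counts.keys())
    return [level_counts[level] for level in range(max_level + 1)]

leaf = {'children': []}
mid = {'children': [leaf, {'children': []}]}
root = {'children': [mid, {'children': []}]}
print(count_nodes_by_level(root))  # [1, 2, 2]
```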

dfs(tree_traversal)

Depth-first search (DFS)

Parameters:

Name Type Description Default
tree_traversal TreeTraversalInterface

The tree traversal interface. Object used to visit the nodes of the tree.

required
Source code in kernpy/core/document.py
def dfs(self, tree_traversal: TreeTraversalInterface):
    """
    Depth-first search (DFS)

    Args:
        tree_traversal (TreeTraversalInterface): The tree traversal interface. Object used to visit the nodes of the tree.
    """
    node = self
    tree_traversal.visit(node)
    for child in self.children:
        child.dfs(tree_traversal)

dfs_iterative(tree_traversal)

Depth-first search (DFS). Iterative version.

Parameters:

Name Type Description Default
tree_traversal TreeTraversalInterface

The tree traversal interface. Object used to visit the nodes of the tree.

required

Returns: None

Source code in kernpy/core/document.py
def dfs_iterative(self, tree_traversal: TreeTraversalInterface):
    """
    Depth-first search (DFS). Iterative version.

    Args:
        tree_traversal (TreeTraversalInterface): The tree traversal interface. Object used to visit the nodes of the tree.

    Returns: None
    """
    stack = [self]
    while stack:
        node = stack.pop()
        tree_traversal.visit(node)
        stack.extend(reversed(node.children))  # Add children in reverse order to maintain DFS order
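
Pushing children in reverse keeps the iterative traversal in the same preorder as the recursive dfs. A standalone sketch on dict nodes:

```python
# Sketch of dfs_iterative on dict nodes: an explicit stack replaces the
# recursion, and extending with reversed children preserves preorder.
def dfs_preorder(root):
    order = []
    stack = [root]
    while stack:
        node = stack.pop()
        order.append(node['name'])
        stack.extend(reversed(node['children']))  # leftmost child on top
    return order

tree = {'name': 'a', 'children': [
    {'name': 'b', 'children': [{'name': 'd', 'children': []}]},
    {'name': 'c', 'children': []},
]}
print(dfs_preorder(tree))  # ['a', 'b', 'd', 'c']
```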

NoteRestToken

Bases: ComplexToken

NoteRestToken class.

Attributes:

Name Type Description
pitch_duration_subtokens list

The subtokens for the pitch and duration

decoration_subtokens list

The subtokens for the decorations

Source code in kernpy/core/tokens.py
class NoteRestToken(ComplexToken):
    """
    NoteRestToken class.

    Attributes:
        pitch_duration_subtokens (list): The subtokens for the pitch and duration
        decoration_subtokens (list): The subtokens for the decorations
    """

    def __init__(
            self,
            encoding: str,
            pitch_duration_subtokens: List[Subtoken],
            decoration_subtokens: List[Subtoken]
    ):
        """
        NoteRestToken constructor.

        Args:
            encoding (str): The complete unprocessed encoding
            pitch_duration_subtokens (List[Subtoken]): The subtokens for the pitch and duration
            decoration_subtokens (List[Subtoken]): The subtokens for the decorations. Individual elements of the token, of type Subtoken
        """
        super().__init__(encoding, TokenCategory.NOTE_REST)
        if not pitch_duration_subtokens or len(pitch_duration_subtokens) == 0:
            raise ValueError('Empty name-duration subtokens')

        for subtoken in pitch_duration_subtokens:
            if not isinstance(subtoken, Subtoken):
                raise ValueError(f'All pitch-duration subtokens must be instances of Subtoken. Found {type(subtoken)}')
        for subtoken in decoration_subtokens:
            if not isinstance(subtoken, Subtoken):
                raise ValueError(f'All decoration subtokens must be instances of Subtoken. Found {type(subtoken)}')

        self.pitch_duration_subtokens = pitch_duration_subtokens
        self.decoration_subtokens = decoration_subtokens

    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Keyword Arguments:
            filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
                indicating whether the token should be included in the export. If provided, only tokens for which the
                function returns True will be exported. Defaults to None. If None, all tokens will be exported.

        Returns (str): The exported token.

        """
        filter_categories_fn = kwargs.get('filter_categories', None)

        # Filter subcategories
        pitch_duration_tokens = {
            subtoken for subtoken in self.pitch_duration_subtokens
            if filter_categories_fn is None or filter_categories_fn(subtoken.category)
        }
        decoration_tokens = {
            subtoken for subtoken in self.decoration_subtokens
            if filter_categories_fn is None or filter_categories_fn(subtoken.category)
        }
        pitch_duration_tokens_sorted = sorted(pitch_duration_tokens, key=lambda t:  (t.category.value, t.encoding))
        decoration_tokens_sorted     = sorted(decoration_tokens,     key=lambda t:  (t.category.value, t.encoding))

        # Join the sorted subtokens
        pitch_duration_part = TOKEN_SEPARATOR.join([subtoken.encoding for subtoken in pitch_duration_tokens_sorted])
        decoration_part = DECORATION_SEPARATOR.join([subtoken.encoding for subtoken in decoration_tokens_sorted])

        result = pitch_duration_part
        if len(decoration_part):
            result += DECORATION_SEPARATOR + decoration_part

        return result if len(result) > 0 else EMPTY_TOKEN

__init__(encoding, pitch_duration_subtokens, decoration_subtokens)

NoteRestToken constructor.

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
pitch_duration_subtokens List[Subtoken]

The subtokens for the pitch and duration

required
decoration_subtokens List[Subtoken]

The subtokens for the decorations. Individual elements of the token, of type Subtoken

required
Source code in kernpy/core/tokens.py
def __init__(
        self,
        encoding: str,
        pitch_duration_subtokens: List[Subtoken],
        decoration_subtokens: List[Subtoken]
):
    """
    NoteRestToken constructor.

    Args:
        encoding (str): The complete unprocessed encoding
        pitch_duration_subtokens (List[Subtoken]): The subtokens for the pitch and duration
        decoration_subtokens (List[Subtoken]): The subtokens for the decorations. Individual elements of the token, of type Subtoken
    """
    super().__init__(encoding, TokenCategory.NOTE_REST)
    if not pitch_duration_subtokens or len(pitch_duration_subtokens) == 0:
        raise ValueError('Empty name-duration subtokens')

    for subtoken in pitch_duration_subtokens:
        if not isinstance(subtoken, Subtoken):
            raise ValueError(f'All pitch-duration subtokens must be instances of Subtoken. Found {type(subtoken)}')
    for subtoken in decoration_subtokens:
        if not isinstance(subtoken, Subtoken):
            raise ValueError(f'All decoration subtokens must be instances of Subtoken. Found {type(subtoken)}')

    self.pitch_duration_subtokens = pitch_duration_subtokens
    self.decoration_subtokens = decoration_subtokens

export(**kwargs)

Exports the token.

Other Parameters:

Name Type Description
filter_categories Optional[Callable[[TokenCategory], bool]]

A function that takes a TokenCategory and returns a boolean indicating whether the token should be included in the export. If provided, only tokens for which the function returns True will be exported. Defaults to None. If None, all tokens will be exported.

Returns (str): The exported token.

Source code in kernpy/core/tokens.py
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Keyword Arguments:
        filter_categories (Optional[Callable[[TokenCategory], bool]]): A function that takes a TokenCategory and returns a boolean
            indicating whether the token should be included in the export. If provided, only tokens for which the
            function returns True will be exported. Defaults to None. If None, all tokens will be exported.

    Returns (str): The exported token.

    """
    filter_categories_fn = kwargs.get('filter_categories', None)

    # Filter subcategories
    pitch_duration_tokens = {
        subtoken for subtoken in self.pitch_duration_subtokens
        if filter_categories_fn is None or filter_categories_fn(subtoken.category)
    }
    decoration_tokens = {
        subtoken for subtoken in self.decoration_subtokens
        if filter_categories_fn is None or filter_categories_fn(subtoken.category)
    }
    pitch_duration_tokens_sorted = sorted(pitch_duration_tokens, key=lambda t:  (t.category.value, t.encoding))
    decoration_tokens_sorted     = sorted(decoration_tokens,     key=lambda t:  (t.category.value, t.encoding))

    # Join the sorted subtokens
    pitch_duration_part = TOKEN_SEPARATOR.join([subtoken.encoding for subtoken in pitch_duration_tokens_sorted])
    decoration_part = DECORATION_SEPARATOR.join([subtoken.encoding for subtoken in decoration_tokens_sorted])

    result = pitch_duration_part
    if len(decoration_part):
        result += DECORATION_SEPARATOR + decoration_part

    return result if len(result) > 0 else EMPTY_TOKEN
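The pipeline above (filter subtokens by category, de-duplicate, sort by `(category, encoding)`, then join) can be sketched standalone. This is an illustrative re-implementation, not the kernpy API: the separators `'@'` and `'~'` and the `'·'` empty marker are hypothetical stand-ins for kernpy's `TOKEN_SEPARATOR`, `DECORATION_SEPARATOR`, and `EMPTY_TOKEN` constants.

```python
# Illustrative sketch of the export pipeline; subtokens are modeled as
# (category_value, encoding) pairs instead of full Token objects.
def export_subtokens(pitch_duration, decoration, filter_categories=None,
                     token_sep='@', deco_sep='~', empty='·'):
    keep = filter_categories or (lambda category: True)

    def select(subtokens):
        # drop duplicates, filter by category, sort by (category, encoding)
        return sorted({t for t in subtokens if keep(t[0])})

    pd_part = token_sep.join(enc for _, enc in select(pitch_duration))
    deco_part = deco_sep.join(enc for _, enc in select(decoration))
    result = pd_part + (deco_sep + deco_part if deco_part else '')
    return result if result else empty
```

With `filter_categories=lambda c: c == 0`, only category-0 subtokens survive, mirroring how the keyword argument prunes the export.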

PitchPositionReferenceSystem

Source code in kernpy/core/gkern.py
class PitchPositionReferenceSystem:
    def __init__(self, base_pitch: AgnosticPitch):
        """
        Initializes the PitchPositionReferenceSystem object.
        Args:
            base_pitch (AgnosticPitch): The AgnosticPitch on the first line of the staff,
                which serves as the reference point for the system.
        """
        self.base_pitch = base_pitch

    def compute_position(self, pitch: AgnosticPitch) -> PositionInStaff:
        """
        Computes the position in staff for the given pitch.
        Args:
            pitch (AgnosticPitch): The AgnosticPitch object to compute the position for.
        Returns:
            PositionInStaff: The PositionInStaff object representing the computed position.
        """
        # map the diatonic letters C–B to 0–6
        LETTER_TO_INDEX = {'C': 0, 'D': 1, 'E': 2,
                           'F': 3, 'G': 4, 'A': 5, 'B': 6}

        # strip off any '+' or '-' accidentals, then grab the letter
        def letter(p: AgnosticPitch) -> str:
            name = p.name.replace('+', '').replace('-', '')
            return AgnosticPitch(name, p.octave).name

        base_letter_idx = LETTER_TO_INDEX[letter(self.base_pitch)]
        target_letter_idx = LETTER_TO_INDEX[letter(pitch)]

        # "octave difference × 7" plus the letter‐index difference
        diatonic_steps = (pitch.octave - self.base_pitch.octave) * 7 \
                         + (target_letter_idx - base_letter_idx)

        # that many "lines or spaces" above (or below) the reference line
        return PositionInStaff(diatonic_steps)

__init__(base_pitch)

Initializes the PitchPositionReferenceSystem object. Args: base_pitch (AgnosticPitch): The AgnosticPitch on the first line of the staff, which serves as the reference point for the system.

Source code in kernpy/core/gkern.py
def __init__(self, base_pitch: AgnosticPitch):
    """
    Initializes the PitchPositionReferenceSystem object.
    Args:
        base_pitch (AgnosticPitch): The AgnosticPitch on the first line of the staff,
            which serves as the reference point for the system.
    """
    self.base_pitch = base_pitch

compute_position(pitch)

Computes the position in staff for the given pitch. Args: pitch (AgnosticPitch): The AgnosticPitch object to compute the position for. Returns: PositionInStaff: The PositionInStaff object representing the computed position.

Source code in kernpy/core/gkern.py
def compute_position(self, pitch: AgnosticPitch) -> PositionInStaff:
    """
    Computes the position in staff for the given pitch.
    Args:
        pitch (AgnosticPitch): The AgnosticPitch object to compute the position for.
    Returns:
        PositionInStaff: The PositionInStaff object representing the computed position.
    """
    # map the diatonic letters C–B to 0–6
    LETTER_TO_INDEX = {'C': 0, 'D': 1, 'E': 2,
                       'F': 3, 'G': 4, 'A': 5, 'B': 6}

    # strip off any '+' or '-' accidentals, then grab the letter
    def letter(p: AgnosticPitch) -> str:
        name = p.name.replace('+', '').replace('-', '')
        return AgnosticPitch(name, p.octave).name

    base_letter_idx = LETTER_TO_INDEX[letter(self.base_pitch)]
    target_letter_idx = LETTER_TO_INDEX[letter(pitch)]

    # "octave difference × 7" plus the letter‐index difference
    diatonic_steps = (pitch.octave - self.base_pitch.octave) * 7 \
                     + (target_letter_idx - base_letter_idx)

    # that many "lines or spaces" above (or below) the reference line
    return PositionInStaff(diatonic_steps)
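The computation above reduces to arithmetic on diatonic letters and octaves: seven letter steps per octave plus the letter-index difference. A minimal standalone sketch (the function name and the `(letter, octave)` pair representation are illustrative, not part of the kernpy API):

```python
# Diatonic distance between two pitches, as in compute_position above.
LETTER_TO_INDEX = {'C': 0, 'D': 1, 'E': 2, 'F': 3, 'G': 4, 'A': 5, 'B': 6}

def diatonic_steps(base, target):
    """base/target are (letter, octave) pairs; the result counts lines and
    spaces above (positive) or below (negative) the reference pitch."""
    (b_letter, b_octave), (t_letter, t_octave) = base, target
    return (t_octave - b_octave) * 7 \
        + (LETTER_TO_INDEX[t_letter] - LETTER_TO_INDEX[b_letter])
```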

PitchRest

Represents a pitch or a rest in a note.

The pitch is represented using the International Organization for Standardization (ISO) pitch notation. In G clef, the first ledger line below the staff is C4; the C an octave above is C5, the C an octave below is C3, and so on.

The Humdrum Kern format uses the following pitch representation: 'c' = C4 'cc' = C5 'ccc' = C6 'cccc' = C7

'C' = C3 'CC' = C2 'CCC' = C1

Rests are represented by the letter 'r' and have no pitch.

This class does not limit the pitch range.

In the following example, the pitch is represented by the letter 'c': 'c' is C4, 'cc' is C5, 'ccc' is C6.

**kern
*clefG2
2c          // C4
2cc         // C5
2ccc        // C6
2C          // C3
2CC         // C2
2CCC        // C1
*-
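The octave rule above (lowercase letters count up from C4, uppercase letters count down from C3) can be sketched as a small standalone parser. This is illustrative only; in kernpy the parsing happens inside the PitchRest constructor.

```python
def parse_kern_pitch(token: str):
    """Return (letter, octave) for a **kern pitch token, or ('r', None) for a rest."""
    if token == 'r':
        return 'r', None
    if token.islower():                      # 'c' -> C4, 'cc' -> C5, ...
        return token[0], 4 + len(token) - 1
    if token.isupper():                      # 'C' -> C3, 'CC' -> C2, ...
        return token[0].lower(), 3 - (len(token) - 1)
    raise ValueError(f'invalid pitch token: {token!r}')
```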
Source code in kernpy/core/tokens.py
class PitchRest:
    """
    Represents a pitch or a rest in a note.

    The pitch is represented using the International Organization for Standardization (ISO) pitch notation.
    In G clef, the first ledger line below the staff is C4; the C an octave above is C5,
    the C an octave below is C3, and so on.

    The Humdrum Kern format uses the following pitch representation:
    'c' = C4
    'cc' = C5
    'ccc' = C6
    'cccc' = C7

    'C' = C3
    'CC' = C2
    'CCC' = C1

    Rests are represented by the letter 'r' and have no pitch.

    This class does not limit the pitch range.


    In the following example, the pitch is represented by the letter 'c': 'c' is C4, 'cc' is C5, 'ccc' is C6.
    ```
    **kern
    *clefG2
    2c          // C4
    2cc         // C5
    2ccc        // C6
    2C          // C3
    2CC         // C2
    2CCC        // C1
    *-
    ```
    """
    C4_PITCH_LOWERCASE = 'c'
    C4_OCATAVE = 4
    C3_PITCH_UPPERCASE = 'C'
    C3_OCATAVE = 3
    REST_CHARACTER = 'r'

    VALID_PITCHES = 'abcdefg' + 'ABCDEFG' + REST_CHARACTER

    def __init__(self, raw_pitch: str):
        """
        Create a new PitchRest object.

        Args:
            raw_pitch (str): pitch representation in Humdrum Kern format

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest = PitchRest('DDD')
        """
        if raw_pitch is None or len(raw_pitch) == 0:
            raise ValueError(f'Empty pitch: pitch cannot be None or empty, but {raw_pitch} was provided.')

        self.encoding = raw_pitch
        self.pitch, self.octave = self.__parse_pitch_octave()

    def __parse_pitch_octave(self) -> (str, int):
        if self.encoding == PitchRest.REST_CHARACTER:
            return PitchRest.REST_CHARACTER, None

        if self.encoding.islower():
            min_octave = PitchRest.C4_OCATAVE
            octave = min_octave + (len(self.encoding) - 1)
            pitch = self.encoding[0].lower()
            return pitch, octave

        if self.encoding.isupper():
            max_octave = PitchRest.C3_OCATAVE
            octave = max_octave - (len(self.encoding) - 1)
            pitch = self.encoding[0].lower()
            return pitch, octave

        raise ValueError(f'Invalid pitch: {self.encoding} is not a valid pitch representation.')

    def is_rest(self) -> bool:
        """
        Check if the pitch is a rest.

        Returns:
            bool: True if the pitch is a rest, False otherwise.

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest.is_rest()
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest.is_rest()
            True
        """
        return self.octave is None

    @staticmethod
    def pitch_comparator(pitch_a: str, pitch_b: str) -> int:
        """
        Compare two pitches of the same octave.

        The lowest pitch letter is 'a'. So 'a' < 'b' < 'c' < 'd' < 'e' < 'f' < 'g'

        Args:
            pitch_a: One pitch letter from 'abcdefg'
            pitch_b: Another pitch letter from 'abcdefg'

        Returns:
            -1 if pitch_a is lower than pitch_b
            0 if pitch_a is equal to pitch_b
            1 if pitch_a is higher than pitch_b

        Examples:
            >>> PitchRest.pitch_comparator('c', 'c')
            0
            >>> PitchRest.pitch_comparator('c', 'd')
            -1
            >>> PitchRest.pitch_comparator('d', 'c')
            1
        """
        if pitch_a < pitch_b:
            return -1
        if pitch_a > pitch_b:
            return 1
        return 0

    def __str__(self):
        return f'{self.encoding}'

    def __repr__(self):
        return f'[PitchRest: {self.encoding}, pitch={self.pitch}, octave={self.octave}]'

    def __eq__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches and rests.

        Args:
            other (PitchRest): The other PitchRest to compare

        Returns (bool):
            True if the pitches are equal, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest == pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('ccc')
            >>> pitch_rest == pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest == pitch_rest2
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest == pitch_rest2
            True

        """
        if not isinstance(other, PitchRest):
            return False
        if self.is_rest() and other.is_rest():
            return True
        if self.is_rest() or other.is_rest():
            return False
        return self.pitch == other.pitch and self.octave == other.octave

    def __ne__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches and rests.
        Args:
            other (PitchRest): The other PitchRest to compare

        Returns (bool):
            True if the pitches are different, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest != pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('ccc')
            >>> pitch_rest != pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest != pitch_rest2
            True
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest != pitch_rest2
            False
        """
        return not self.__eq__(other)

    def __gt__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches.

        If either pitch is a rest, the comparison raises an exception.

        Args:
            other (PitchRest): The other PitchRest to compare

        Returns (bool): True if this pitch is higher than the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest > pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest > pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest > pitch_rest2
            True
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest > pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest > pitch_rest2
            Traceback (most recent call last):
            ValueError: ...


        """
        if self.is_rest() or other.is_rest():
            raise ValueError(f'Invalid comparison: > operator cannot be used to compare the pitch of a rest.\n\
            self={repr(self)} > other={repr(other)}')

        if self.octave > other.octave:
            return True
        if self.octave == other.octave:
            return PitchRest.pitch_comparator(self.pitch, other.pitch) > 0
        return False

    def __lt__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches.

        If either pitch is a rest, the comparison raises an exception.

        Args:
            other (PitchRest): The other PitchRest to compare

        Returns:
            True if this pitch is lower than the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest < pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest < pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest < pitch_rest2
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest < pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest < pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...

        """
        if self.is_rest() or other.is_rest():
            raise ValueError(f'Invalid comparison: < operator cannot be used to compare the pitch of a rest.\n\
            self={repr(self)} < other={repr(other)}')

        if self.octave < other.octave:
            return True
        if self.octave == other.octave:
            return PitchRest.pitch_comparator(self.pitch, other.pitch) < 0
        return False

    def __ge__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
        Args:
            other (PitchRest): The other PitchRest to compare

        Returns (bool):
            True if this pitch is higher than or equal to the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest >= pitch_rest2
            False
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest >= pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest >= pitch_rest2
            True
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest >= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest >= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...


        """
        return self.__gt__(other) or self.__eq__(other)

    def __le__(self, other: 'PitchRest') -> bool:
        """
        Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
        Args:
            other (PitchRest): The other PitchRest to compare

        Returns (bool): True if this pitch is lower than or equal to the other, False otherwise

        Examples:
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('d')
            >>> pitch_rest <= pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest <= pitch_rest2
            True
            >>> pitch_rest = PitchRest('c')
            >>> pitch_rest2 = PitchRest('b')
            >>> pitch_rest <= pitch_rest2
            False
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('c')
            >>> pitch_rest <= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...
            >>> pitch_rest = PitchRest('r')
            >>> pitch_rest2 = PitchRest('r')
            >>> pitch_rest <= pitch_rest2
            Traceback (most recent call last):
            ...
            ValueError: ...

        """
        return self.__lt__(other) or self.__eq__(other)

__eq__(other)

Compare two pitches and rests.

Parameters:

Name Type Description Default
other PitchRest

The other PitchRest to compare

required

Returns (bool): True if the pitches are equal, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest == pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('ccc')
>>> pitch_rest == pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest == pitch_rest2
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest == pitch_rest2
True
Source code in kernpy/core/tokens.py
def __eq__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches and rests.

    Args:
        other (PitchRest): The other PitchRest to compare

    Returns (bool):
        True if the pitches are equal, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest == pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('ccc')
        >>> pitch_rest == pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest == pitch_rest2
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest == pitch_rest2
        True

    """
    if not isinstance(other, PitchRest):
        return False
    if self.is_rest() and other.is_rest():
        return True
    if self.is_rest() or other.is_rest():
        return False
    return self.pitch == other.pitch and self.octave == other.octave

__ge__(other)

Compare two pitches. If either PitchRest is a rest, the comparison raises an exception. Args: other (PitchRest): The other PitchRest to compare

Returns (bool): True if this pitch is higher than or equal to the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest >= pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest >= pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest >= pitch_rest2
True
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest >= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest >= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
Source code in kernpy/core/tokens.py
def __ge__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
    Args:
        other (PitchRest): The other PitchRest to compare

    Returns (bool):
        True if this pitch is higher than or equal to the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest >= pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest >= pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest >= pitch_rest2
        True
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest >= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest >= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...


    """
    return self.__gt__(other) or self.__eq__(other)

__gt__(other)

Compare two pitches.

If either pitch is a rest, the comparison raises an exception.

Parameters:

Name Type Description Default
other PitchRest

The other PitchRest to compare

required

Returns (bool): True if this pitch is higher than the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest > pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest > pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest > pitch_rest2
True
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest > pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest > pitch_rest2
Traceback (most recent call last):
ValueError: ...
Source code in kernpy/core/tokens.py
def __gt__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches.

    If either pitch is a rest, the comparison raises an exception.

    Args:
        other (PitchRest): The other PitchRest to compare

    Returns (bool): True if this pitch is higher than the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest > pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest > pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest > pitch_rest2
        True
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest > pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest > pitch_rest2
        Traceback (most recent call last):
        ValueError: ...


    """
    if self.is_rest() or other.is_rest():
        raise ValueError(f'Invalid comparison: > operator cannot be used to compare the pitch of a rest.\n\
        self={repr(self)} > other={repr(other)}')

    if self.octave > other.octave:
        return True
    if self.octave == other.octave:
        return PitchRest.pitch_comparator(self.pitch, other.pitch) > 0
    return False
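The ordering implemented by the comparison operators (octave first, then plain character order on the letter, where 'a' is lowest, as documented in pitch_comparator) can be sketched with `(letter, octave)` pairs. `pitch_rest_gt` is a hypothetical helper written for illustration, not part of kernpy.

```python
def pitch_rest_gt(a, b):
    """a, b are (letter, octave) pairs; octave None marks a rest."""
    if a[1] is None or b[1] is None:
        raise ValueError('cannot order a rest')
    if a[1] != b[1]:                 # compare octaves first
        return a[1] > b[1]
    return a[0] > b[0]               # then letters: 'a' < 'b' < ... < 'g'
```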

__init__(raw_pitch)

Create a new PitchRest object.

Parameters:

Name Type Description Default
raw_pitch str

pitch representation in Humdrum Kern format

required

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest = PitchRest('r')
>>> pitch_rest = PitchRest('DDD')
Source code in kernpy/core/tokens.py
def __init__(self, raw_pitch: str):
    """
    Create a new PitchRest object.

    Args:
        raw_pitch (str): pitch representation in Humdrum Kern format

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest = PitchRest('DDD')
    """
    if raw_pitch is None or len(raw_pitch) == 0:
        raise ValueError(f'Empty pitch: pitch cannot be None or empty, but {raw_pitch} was provided.')

    self.encoding = raw_pitch
    self.pitch, self.octave = self.__parse_pitch_octave()

__le__(other)

Compare two pitches. If either PitchRest is a rest, the comparison raises an exception. Args: other (PitchRest): The other PitchRest to compare

Returns (bool): True if this pitch is lower than or equal to the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest <= pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest <= pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest <= pitch_rest2
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest <= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest <= pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
Source code in kernpy/core/tokens.py
def __le__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches. If either PitchRest is a rest, the comparison raises an exception.
    Args:
        other (PitchRest): The other PitchRest to compare

    Returns (bool): True if this pitch is lower than or equal to the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest <= pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest <= pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest <= pitch_rest2
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest <= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest <= pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...

    """
    return self.__lt__(other) or self.__eq__(other)

__lt__(other)

Compare two pitches.

If either pitch is a rest, the comparison raises an exception.

Parameters:

Name Type Description Default
other 'PitchRest'

The other PitchRest to compare

required

Returns:

Type Description
bool

True if this pitch is lower than the other, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('d')
>>> pitch_rest < pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest < pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('b')
>>> pitch_rest < pitch_rest2
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest < pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest < pitch_rest2
Traceback (most recent call last):
...
ValueError: ...
Source code in kernpy/core/tokens.py
def __lt__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches.

    If either pitch is a rest, the comparison raises an exception.

    Args:
        other (PitchRest): The other PitchRest to compare

    Returns:
        True if this pitch is lower than the other, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('d')
        >>> pitch_rest < pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest < pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('b')
        >>> pitch_rest < pitch_rest2
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest < pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest < pitch_rest2
        Traceback (most recent call last):
        ...
        ValueError: ...

    """
    if self.is_rest() or other.is_rest():
        raise ValueError(f'Invalid comparison: < operator cannot be used to compare the pitch of a rest.\n\
        self={repr(self)} < other={repr(other)}')

    if self.octave < other.octave:
        return True
    if self.octave == other.octave:
        return PitchRest.pitch_comparator(self.pitch, other.pitch) < 0
    return False

__ne__(other)

Compare two pitches and rests. Args: other (PitchRest): The other PitchRest to compare

Returns (bool): True if the pitches are different, False otherwise

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('c')
>>> pitch_rest != pitch_rest2
False
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('ccc')
>>> pitch_rest != pitch_rest2
True
>>> pitch_rest = PitchRest('c')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest != pitch_rest2
True
>>> pitch_rest = PitchRest('r')
>>> pitch_rest2 = PitchRest('r')
>>> pitch_rest != pitch_rest2
False
Source code in kernpy/core/tokens.py
def __ne__(self, other: 'PitchRest') -> bool:
    """
    Compare two pitches and rests.
    Args:
        other (PitchRest): The other PitchRest to compare

    Returns (bool):
        True if the pitches are different, False otherwise

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('c')
        >>> pitch_rest != pitch_rest2
        False
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('ccc')
        >>> pitch_rest != pitch_rest2
        True
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest != pitch_rest2
        True
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest2 = PitchRest('r')
        >>> pitch_rest != pitch_rest2
        False
    """
    return not self.__eq__(other)

is_rest()

Check whether this PitchRest represents a rest.

Returns:

Name Type Description
bool bool

True if it represents a rest, False otherwise.

Examples:

>>> pitch_rest = PitchRest('c')
>>> pitch_rest.is_rest()
False
>>> pitch_rest = PitchRest('r')
>>> pitch_rest.is_rest()
True
Source code in kernpy/core/tokens.py
def is_rest(self) -> bool:
    """
    Check whether this PitchRest represents a rest.

    Returns:
        bool: True if it represents a rest, False otherwise.

    Examples:
        >>> pitch_rest = PitchRest('c')
        >>> pitch_rest.is_rest()
        False
        >>> pitch_rest = PitchRest('r')
        >>> pitch_rest.is_rest()
        True
    """
    return self.octave is None

pitch_comparator(pitch_a, pitch_b) staticmethod

Compare two pitches of the same octave.

The lowest pitch name is 'a': 'a' < 'b' < 'c' < 'd' < 'e' < 'f' < 'g'.

Parameters:

Name Type Description Default
pitch_a str

One pitch name from 'abcdefg'

required
pitch_b str

Another pitch name from 'abcdefg'

required

Returns:

Type Description
int

-1 if pitch_a is lower than pitch_b

int

0 if pitch_a is equal to pitch_b

int

1 if pitch_a is higher than pitch_b

Examples:

>>> PitchRest.pitch_comparator('c', 'c')
0
>>> PitchRest.pitch_comparator('c', 'd')
-1
>>> PitchRest.pitch_comparator('d', 'c')
1
Source code in kernpy/core/tokens.py
@staticmethod
def pitch_comparator(pitch_a: str, pitch_b: str) -> int:
    """
    Compare two pitches of the same octave.

    The lowest pitch name is 'a': 'a' < 'b' < 'c' < 'd' < 'e' < 'f' < 'g'.

    Args:
        pitch_a: One pitch name from 'abcdefg'
        pitch_b: Another pitch name from 'abcdefg'

    Returns:
        -1 if pitch_a is lower than pitch_b
        0 if pitch_a is equal to pitch_b
        1 if pitch_a is higher than pitch_b

    Examples:
        >>> PitchRest.pitch_comparator('c', 'c')
        0
        >>> PitchRest.pitch_comparator('c', 'd')
        -1
        >>> PitchRest.pitch_comparator('d', 'c')
        1
    """
    if pitch_a < pitch_b:
        return -1
    if pitch_a > pitch_b:
        return 1
    return 0

PositionInStaff

Source code in kernpy/core/gkern.py
class PositionInStaff:
    LINE_CHARACTER = 'L'
    SPACE_CHARACTER = 'S'

    def __init__(self, line_space: int):
        """
        Initializes the PositionInStaff object.

        Args:
            line_space (int): 0 for bottom line, -1 for space under bottom line, 1 for space above bottom line. \
             Increments by 1 for each line or space.

        """
        self.line_space = line_space

    @classmethod
    def from_line(cls, line: int) -> PositionInStaff:
        """
        Creates a PositionInStaff object from a line number.

        Args:
            line (int): The line number: line 1 is the bottom line, line 2 the second line from the bottom, line 0 the ledger line below the staff.

        Returns:
            PositionInStaff: The PositionInStaff object, with line_space 0 for the bottom line, 2 for the second line, -2 for the ledger line below the staff, etc.
        """
        return cls((line - 1) * 2)

    @classmethod
    def from_space(cls, space: int) -> PositionInStaff:
        """
        Creates a PositionInStaff object from a space number.

        Args:
            space (int): The space number: space 1 is the bottom space, space 2 the one above it, and so on.

        Returns:
            PositionInStaff: The PositionInStaff object.
        """
        return cls(space * 2 - 1)

    @classmethod
    def from_encoded(cls, encoded: str) -> PositionInStaff:
        """
        Creates a PositionInStaff object from an encoded string.

        Args:
            encoded (str): The encoded string.

        Returns:
            PositionInStaff: The PositionInStaff object.
        """
        if encoded.startswith(cls.LINE_CHARACTER):
            line = int(encoded[1:])  # Extract the line number
            return cls.from_line(line)
        elif encoded.startswith(cls.SPACE_CHARACTER):
            space = int(encoded[1:])  # Extract the space number
            return cls.from_space(space)
        else:
            raise ValueError(f"Invalid encoded string: {encoded}. "
                             f"Expected it to start with '{cls.LINE_CHARACTER}' or '{cls.SPACE_CHARACTER}'.")


    def line(self):
        """
        Returns the line number of the position in staff.
        """
        return self.line_space // 2 + 1


    def space(self):
        """
        Returns the space number of the position in staff.
        """
        return (self.line_space - 1) // 2 + 1


    def is_line(self) -> bool:
        """
        Returns True if the position is a line, False if it is a space (a position is always one or the other).
        """
        return self.line_space % 2 == 0

    def move(self, line_space_difference: int) -> PositionInStaff:
        """
        Returns a new PositionInStaff object with the position moved by the given number of lines or spaces.

        Args:
            line_space_difference (int): The number of lines or spaces to move.

        Returns:
            PositionInStaff: The new PositionInStaff object.
        """
        return PositionInStaff(self.line_space + line_space_difference)

    def position_below(self) -> PositionInStaff:
        """
        Returns the position below the current position.
        """
        return self.move(-2)

    def position_above(self) -> PositionInStaff:
        """
        Returns the position above the current position.
        """
        return self.move(2)



    def __str__(self) -> str:
        """
        Returns the string representation of the position in staff.
        """
        if self.is_line():
            return f"{self.LINE_CHARACTER}{int(self.line())}"
        else:
            return f"{self.SPACE_CHARACTER}{int(self.space())}"

    def __repr__(self) -> str:
        """
        Returns the string representation of the PositionInStaff object.
        """
        return f"PositionInStaff(line_space={self.line_space}), {self.__str__()}"

    def __eq__(self, other) -> bool:
        """
        Compares two PositionInStaff objects.
        """
        if not isinstance(other, PositionInStaff):
            return False
        return self.line_space == other.line_space

    def __ne__(self, other) -> bool:
        """
        Compares two PositionInStaff objects.
        """
        return not self.__eq__(other)

    def __hash__(self) -> int:
        """
        Returns the hash of the PositionInStaff object.
        """
        return hash(self.line_space)

    def __lt__(self, other) -> bool:
        """
        Compares two PositionInStaff objects.
        """
        if not isinstance(other, PositionInStaff):
            return NotImplemented
        return self.line_space < other.line_space

__eq__(other)

Compares two PositionInStaff objects.

Source code in kernpy/core/gkern.py
def __eq__(self, other) -> bool:
    """
    Compares two PositionInStaff objects.
    """
    if not isinstance(other, PositionInStaff):
        return False
    return self.line_space == other.line_space

__hash__()

Returns the hash of the PositionInStaff object.

Source code in kernpy/core/gkern.py
def __hash__(self) -> int:
    """
    Returns the hash of the PositionInStaff object.
    """
    return hash(self.line_space)

__init__(line_space)

Initializes the PositionInStaff object.

Parameters:

Name Type Description Default
line_space int

0 for bottom line, -1 for space under bottom line, 1 for space above bottom line. Increments by 1 for each line or space.

required
Source code in kernpy/core/gkern.py
def __init__(self, line_space: int):
    """
    Initializes the PositionInStaff object.

    Args:
        line_space (int): 0 for bottom line, -1 for space under bottom line, 1 for space above bottom line. \
         Increments by 1 for each line or space.

    """
    self.line_space = line_space

__lt__(other)

Compares two PositionInStaff objects.

Source code in kernpy/core/gkern.py
def __lt__(self, other) -> bool:
    """
    Compares two PositionInStaff objects.
    """
    if not isinstance(other, PositionInStaff):
        return NotImplemented
    return self.line_space < other.line_space

__ne__(other)

Compares two PositionInStaff objects.

Source code in kernpy/core/gkern.py
def __ne__(self, other) -> bool:
    """
    Compares two PositionInStaff objects.
    """
    return not self.__eq__(other)

__repr__()

Returns the string representation of the PositionInStaff object.

Source code in kernpy/core/gkern.py
def __repr__(self) -> str:
    """
    Returns the string representation of the PositionInStaff object.
    """
    return f"PositionInStaff(line_space={self.line_space}), {self.__str__()}"

__str__()

Returns the string representation of the position in staff.

Source code in kernpy/core/gkern.py
def __str__(self) -> str:
    """
    Returns the string representation of the position in staff.
    """
    if self.is_line():
        return f"{self.LINE_CHARACTER}{int(self.line())}"
    else:
        return f"{self.SPACE_CHARACTER}{int(self.space())}"

from_encoded(encoded) classmethod

Creates a PositionInStaff object from an encoded string.

Parameters:

Name Type Description Default
encoded str

The encoded string.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The PositionInStaff object.

Source code in kernpy/core/gkern.py
@classmethod
def from_encoded(cls, encoded: str) -> PositionInStaff:
    """
    Creates a PositionInStaff object from an encoded string.

    Args:
        encoded (str): The encoded string.

    Returns:
        PositionInStaff: The PositionInStaff object.
    """
    if encoded.startswith(cls.LINE_CHARACTER):
        line = int(encoded[1:])  # Extract the line number
        return cls.from_line(line)
    elif encoded.startswith(cls.SPACE_CHARACTER):
        space = int(encoded[1:])  # Extract the space number
        return cls.from_space(space)
    else:
        raise ValueError(f"Invalid encoded string: {encoded}. "
                         f"Expected it to start with '{cls.LINE_CHARACTER}' or '{cls.SPACE_CHARACTER}'.")

from_line(line) classmethod

Creates a PositionInStaff object from a line number.

Parameters:

Name Type Description Default
line int

The line number: line 1 is the bottom line, line 2 the second line from the bottom, line 0 the ledger line below the staff.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The PositionInStaff object, with line_space 0 for the bottom line, 2 for the second line, -2 for the ledger line below the staff, etc.

Source code in kernpy/core/gkern.py
@classmethod
def from_line(cls, line: int) -> PositionInStaff:
    """
    Creates a PositionInStaff object from a line number.

    Args:
        line (int): The line number: line 1 is the bottom line, line 2 the second line from the bottom, line 0 the ledger line below the staff.

    Returns:
        PositionInStaff: The PositionInStaff object, with line_space 0 for the bottom line, 2 for the second line, -2 for the ledger line below the staff, etc.
    """
    return cls((line - 1) * 2)

from_space(space) classmethod

Creates a PositionInStaff object from a space number.

Parameters:

Name Type Description Default
space int

The space number: space 1 is the bottom space, space 2 the one above it, and so on.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The PositionInStaff object.

Source code in kernpy/core/gkern.py
@classmethod
def from_space(cls, space: int) -> PositionInStaff:
    """
    Creates a PositionInStaff object from a space number.

    Args:
        space (int): The space number: space 1 is the bottom space, space 2 the one above it, and so on.

    Returns:
        PositionInStaff: The PositionInStaff object.
    """
    return cls(space * 2 - 1)

is_line()

Returns True if the position is a line, False if it is a space (a position is always one or the other).

Source code in kernpy/core/gkern.py
def is_line(self) -> bool:
    """
    Returns True if the position is a line, False if it is a space (a position is always one or the other).
    """
    return self.line_space % 2 == 0

line()

Returns the line number of the position in staff.

Source code in kernpy/core/gkern.py
def line(self):
    """
    Returns the line number of the position in staff.
    """
    return self.line_space // 2 + 1

move(line_space_difference)

Returns a new PositionInStaff object with the position moved by the given number of lines or spaces.

Parameters:

Name Type Description Default
line_space_difference int

The number of lines or spaces to move.

required

Returns:

Name Type Description
PositionInStaff PositionInStaff

The new PositionInStaff object.

Source code in kernpy/core/gkern.py
def move(self, line_space_difference: int) -> PositionInStaff:
    """
    Returns a new PositionInStaff object with the position moved by the given number of lines or spaces.

    Args:
        line_space_difference (int): The number of lines or spaces to move.

    Returns:
        PositionInStaff: The new PositionInStaff object.
    """
    return PositionInStaff(self.line_space + line_space_difference)

position_above()

Returns the position above the current position.

Source code in kernpy/core/gkern.py
def position_above(self) -> PositionInStaff:
    """
    Returns the position above the current position.
    """
    return self.move(2)

position_below()

Returns the position below the current position.

Source code in kernpy/core/gkern.py
def position_below(self) -> PositionInStaff:
    """
    Returns the position below the current position.
    """
    return self.move(-2)

space()

Returns the space number of the position in staff.

Source code in kernpy/core/gkern.py
def space(self):
    """
    Returns the space number of the position in staff.
    """
    return (self.line_space - 1) // 2 + 1

RootSpineImporter

Bases: SpineImporter

Source code in kernpy/core/root_spine_importer.py
class RootSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        RootSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        #return RootSpineListener() # TODO: Create a custom functional listener for RootSpineImporter
        return KernSpineListener()

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        kern_spine_importer = KernSpineImporter()
        token = kern_spine_importer.import_token(encoding)

        return token  # The **root spine tokens are always a subset of the **kern spine tokens

__init__(verbose=False)

RootSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/root_spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    RootSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

SignatureNodes

SignatureNodes class.

This class is used to store the last signature nodes of a tree. It is used to keep track of the last signature nodes.

Attributes: nodes (dict): A dictionary that stores the last signature nodes, allowing several tokens to be added without repetition. The key is the signature descendant token class name (KeyToken, MeterSymbolToken, etc.); the value is the node.

Source code in kernpy/core/document.py
class SignatureNodes:
    """
    SignatureNodes class.

    This class is used to store the last signature nodes of a tree.
    It is used to keep track of the last signature nodes.

    Attributes:
        nodes (dict): A dictionary that stores the last signature nodes, allowing
            several tokens to be added without repetition. The key is the signature
            descendant token class name (KeyToken, MeterSymbolToken, etc.); the
            value is the node.
    """

    def __init__(self):
        """
        Create an instance of SignatureNodes. Initialize the nodes as an empty dictionary.

        Examples:
            >>> signature_nodes = SignatureNodes()
            >>> signature_nodes.nodes
            {}
        """
        self.nodes = {}

    def clone(self):
        """
        Create a copy of the SignatureNodes instance. The nodes dict is shallow-copied,
        so the node objects themselves are shared with the original.
        Returns: A new instance of SignatureNodes with the nodes copied.

        # TODO: This method is equivalent to the following code:
        # from copy import deepcopy
        # signature_nodes_to_copy = SignatureNodes()
        # ...
        # result = deepcopy(signature_nodes_to_copy)
        # It should be tested.
        """
        result = SignatureNodes()
        result.nodes = copy(self.nodes)
        return result

    def update(self, node):
        self.nodes[node.token.__class__.__name__] = node
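The bookkeeping described above can be sketched in isolation. This example uses hypothetical stand-in token and node classes (not the kernpy API) to show that nodes are keyed by the token's class name, so a newer signature of the same kind replaces the older one without duplicates.

```python
from copy import copy

# Hypothetical stand-ins for kernpy's signature token classes.
class KeyToken: pass
class MeterSymbolToken: pass

class Node:
    def __init__(self, token):
        self.token = token

class SignatureNodes:
    def __init__(self):
        self.nodes = {}

    def clone(self):
        result = SignatureNodes()
        result.nodes = copy(self.nodes)   # shallow: node objects are shared
        return result

    def update(self, node):
        # Keyed by class name: one entry per signature kind.
        self.nodes[node.token.__class__.__name__] = node

sig = SignatureNodes()
sig.update(Node(KeyToken()))
sig.update(Node(MeterSymbolToken()))
sig.update(Node(KeyToken()))              # replaces the earlier KeyToken entry
print(sorted(sig.nodes))                  # ['KeyToken', 'MeterSymbolToken']

snapshot = sig.clone()
print(snapshot.nodes is sig.nodes)        # False: the dicts are independent
```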

__init__()

Create an instance of SignatureNodes. Initialize the nodes as an empty dictionary.

Examples:

>>> signature_nodes = SignatureNodes()
>>> signature_nodes.nodes
{}
Source code in kernpy/core/document.py
def __init__(self):
    """
    Create an instance of SignatureNodes. Initialize the nodes as an empty dictionary.

    Examples:
        >>> signature_nodes = SignatureNodes()
        >>> signature_nodes.nodes
        {}
    """
    self.nodes = {}

clone()

Create a copy of the SignatureNodes instance (the nodes dict is shallow-copied, so the node objects are shared). Returns: A new instance of SignatureNodes with the nodes copied.

TODO: This method is equivalent to the following code:

from copy import deepcopy

signature_nodes_to_copy = SignatureNodes()

...

result = deepcopy(signature_nodes_to_copy)

It should be tested.

Source code in kernpy/core/document.py
def clone(self):
    """
    Create a copy of the SignatureNodes instance. The nodes dict is shallow-copied,
    so the node objects themselves are shared with the original.
    Returns: A new instance of SignatureNodes with the nodes copied.

    # TODO: This method is equivalent to the following code:
    # from copy import deepcopy
    # signature_nodes_to_copy = SignatureNodes()
    # ...
    # result = deepcopy(signature_nodes_to_copy)
    # It should be tested.
    """
    result = SignatureNodes()
    result.nodes = copy(self.nodes)
    return result

SignatureToken

Bases: SimpleToken

SignatureToken class for all signature tokens. It will be overridden by more specific classes.

Source code in kernpy/core/tokens.py
class SignatureToken(SimpleToken):
    """
    SignatureToken class for all signature tokens. It will be overridden by more specific classes.
    """

    def __init__(self, encoding, category=TokenCategory.SIGNATURES):
        super().__init__(encoding, category)

SimpleToken

Bases: Token

SimpleToken class.

Source code in kernpy/core/tokens.py
class SimpleToken(Token):
    """
    SimpleToken class.
    """

    def __init__(self, encoding, category):
        super().__init__(encoding, category)

    def export(self, **kwargs) -> str:
        """
        Exports the token.

        Args:
            **kwargs: 'filter_categories' (Optional[Callable[[TokenCategory], bool]]): It is ignored in this class.

        Returns (str): The encoded token representation.
        """
        return self.encoding

export(**kwargs)

Exports the token.

Parameters:

Name Type Description Default
**kwargs

'filter_categories' (Optional[Callable[[TokenCategory], bool]]): It is ignored in this class.

{}

Returns (str): The encoded token representation.

Source code in kernpy/core/tokens.py
def export(self, **kwargs) -> str:
    """
    Exports the token.

    Args:
        **kwargs: 'filter_categories' (Optional[Callable[[TokenCategory], bool]]): It is ignored in this class.

    Returns (str): The encoded token representation.
    """
    return self.encoding

SpineImporter

Bases: ABC

Source code in kernpy/core/spine_importer.py
class SpineImporter(ABC):
    def __init__(self, verbose: Optional[bool] = False):
        """
        SpineImporter constructor.
        This class is an abstract base class for importing all kinds of spines.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        self.import_listener = self.import_listener()
        self.error_listener = ErrorListener(verbose=verbose)

    @abstractmethod
    def import_listener(self) -> BaseANTLRSpineParserListener:
        pass

    @abstractmethod
    def import_token(self, encoding: str) -> Token:
        pass

    @classmethod
    def _raise_error_if_wrong_input(cls, encoding: str):
        if encoding is None:
            raise ValueError("Encoding cannot be None")
        if not isinstance(encoding, str):
            raise TypeError("Encoding must be a string")
        if encoding == '':
            raise ValueError("Encoding cannot be an empty string")

__init__(verbose=False)

SpineImporter constructor. This class is an abstract base class for importing all kinds of spines.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/spine_importer.py
def __init__(self, verbose: Optional[bool] = False):
    """
    SpineImporter constructor.
    This class is an abstract base class for importing all kinds of spines.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    self.import_listener = self.import_listener()
    self.error_listener = ErrorListener(verbose=verbose)

SpineOperationToken

Bases: SimpleToken

SpineOperationToken class.

This token represents different operations in the Humdrum kern encoding. Available operations: *- (spine-path terminator), * (null interpretation), *+ (add spines), *^ (split spines), *x (exchange spines).

Attributes:

Name Type Description
cancelled_at_stage int

The stage at which the operation was cancelled. Defaults to None.

Source code in kernpy/core/tokens.py
class SpineOperationToken(SimpleToken):
    """
    SpineOperationToken class.

    This token represents different operations in the Humdrum kern encoding.
    These are the available operations:
        - `*-`:  spine-path terminator.
        - `*`: null interpretation.
        - `*+`: add spines.
        - `*^`: split spines.
        - `*x`: exchange spines.

    Attributes:
        cancelled_at_stage (int): The stage at which the operation was cancelled. Defaults to None.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.SPINE_OPERATION)
        self.cancelled_at_stage = None

    def is_cancelled_at(self, stage) -> bool:
        """
        Checks if the operation was cancelled at the given stage.

        Args:
            stage (int): The stage at which the operation was cancelled.

        Returns:
            bool: True if the operation was cancelled at the given stage, False otherwise.
        """
        if self.cancelled_at_stage is None:
            return False
        else:
            return self.cancelled_at_stage < stage
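The is_cancelled_at logic above can be exercised directly: an operation counts as cancelled only for stages strictly after the stage recorded in cancelled_at_stage. This is a minimal sketch mirroring the class shown above, not the full kernpy token.

```python
class SpineOperationToken:
    def __init__(self):
        self.cancelled_at_stage = None

    def is_cancelled_at(self, stage) -> bool:
        # Strict comparison: the operation is still active at the stage
        # where it was cancelled, and inactive only afterwards.
        if self.cancelled_at_stage is None:
            return False
        return self.cancelled_at_stage < stage

op = SpineOperationToken()
print(op.is_cancelled_at(5))   # False: never cancelled
op.cancelled_at_stage = 3
print(op.is_cancelled_at(3))   # False: not yet cancelled *at* stage 3
print(op.is_cancelled_at(4))   # True: cancelled for all later stages
```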

is_cancelled_at(stage)

Checks if the operation was cancelled at the given stage.

Parameters:

Name Type Description Default
stage int

The stage at which the operation was cancelled.

required

Returns:

Name Type Description
bool bool

True if the operation was cancelled at the given stage, False otherwise.

Source code in kernpy/core/tokens.py
def is_cancelled_at(self, stage) -> bool:
    """
    Checks if the operation was cancelled at the given stage.

    Args:
        stage (int): The stage at which the operation was cancelled.

    Returns:
        bool: True if the operation was cancelled at the given stage, False otherwise.
    """
    if self.cancelled_at_stage is None:
        return False
    else:
        return self.cancelled_at_stage < stage

Staff

Source code in kernpy/core/gkern.py
class Staff:
    def position_in_staff(self, *, clef: Clef, pitch: AgnosticPitch) -> PositionInStaff:
        """
        Returns the position in staff for the given clef and pitch.
        """
        bottom_cleff_note_name = clef.bottom_line()

position_in_staff(*, clef, pitch)

Returns the position in staff for the given clef and pitch.

Source code in kernpy/core/gkern.py
def position_in_staff(self, *, clef: Clef, pitch: AgnosticPitch) -> PositionInStaff:
    """
    Returns the position in staff for the given clef and pitch.
    """
    bottom_cleff_note_name = clef.bottom_line()

Subtoken

Subtoken class. Subtokens are the smallest units of categories. ComplexToken objects are composed of subtokens.

Attributes:

Name Type Description
encoding

The complete unprocessed encoding

category

The subtoken category, one of SubTokenCategory

Source code in kernpy/core/tokens.py
class Subtoken:
    """
    Subtoken class. Subtokens are the smallest units of categories. ComplexToken objects are composed of subtokens.

    Attributes:
        encoding: The complete unprocessed encoding
        category: The subtoken category, one of SubTokenCategory
    """
    DECORATION = None

    def __init__(self, encoding: str, category: TokenCategory):
        """
        Subtoken constructor

        Args:
            encoding (str): The complete unprocessed encoding
            category (TokenCategory): The subtoken category. \
                It should be a child of the main 'TokenCategory' in the hierarchy.

        """
        self.encoding = encoding
        self.category = category

    def __str__(self):
        """
        Returns the string representation of the subtoken.

        Returns (str): The string representation of the subtoken.
        """
        return self.encoding

    def __eq__(self, other):
        """
        Compare two subtokens.

        Args:
            other (Subtoken): The other subtoken to compare.
        Returns (bool): True if the subtokens are equal, False otherwise.
        """
        if not isinstance(other, Subtoken):
            return False
        return self.encoding == other.encoding and self.category == other.category

    def __ne__(self, other):
        """
        Compare two subtokens.

        Args:
            other (Subtoken): The other subtoken to compare.
        Returns (bool): True if the subtokens are different, False otherwise.
        """
        return not self.__eq__(other)

    def __hash__(self):
        """
        Returns the hash of the subtoken.

        Returns (int): The hash of the subtoken.
        """
        return hash((self.encoding, self.category))

__eq__(other)

Compare two subtokens.

Parameters:

Name Type Description Default
other Subtoken

The other subtoken to compare.

required

Returns (bool): True if the subtokens are equal, False otherwise.

Source code in kernpy/core/tokens.py
def __eq__(self, other):
    """
    Compare two subtokens.

    Args:
        other (Subtoken): The other subtoken to compare.
    Returns (bool): True if the subtokens are equal, False otherwise.
    """
    if not isinstance(other, Subtoken):
        return False
    return self.encoding == other.encoding and self.category == other.category

__hash__()

Returns the hash of the subtoken.

Returns (int): The hash of the subtoken.

Source code in kernpy/core/tokens.py
def __hash__(self):
    """
    Returns the hash of the subtoken.

    Returns (int): The hash of the subtoken.
    """
    return hash((self.encoding, self.category))

__init__(encoding, category)

Subtoken constructor

Parameters:

Name Type Description Default
encoding str

The complete unprocessed encoding

required
category TokenCategory

The subtoken category. It should be a child of the main 'TokenCategory' in the hierarchy.

required
Source code in kernpy/core/tokens.py
def __init__(self, encoding: str, category: TokenCategory):
    """
    Subtoken constructor

    Args:
        encoding (str): The complete unprocessed encoding
        category (TokenCategory): The subtoken category. \
            It should be a child of the main 'TokenCategory' in the hierarchy.

    """
    self.encoding = encoding
    self.category = category

__ne__(other)

Compare two subtokens.

Parameters:

Name Type Description Default
other Subtoken

The other subtoken to compare.

required

Returns (bool): True if the subtokens are different, False otherwise.

Source code in kernpy/core/tokens.py
def __ne__(self, other):
    """
    Compare two subtokens.

    Args:
        other (Subtoken): The other subtoken to compare.
    Returns (bool): True if the subtokens are different, False otherwise.
    """
    return not self.__eq__(other)

__str__()

Returns the string representation of the subtoken.

Returns (str): The string representation of the subtoken.

Source code in kernpy/core/tokens.py
def __str__(self):
    """
    Returns the string representation of the subtoken.

    Returns (str): The string representation of the subtoken.
    """
    return self.encoding
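Because __eq__ and __hash__ are defined over the same (encoding, category) pair, equal subtokens deduplicate cleanly in sets and dicts. The sketch below mirrors the relevant methods from the Subtoken class shown above, with a hypothetical two-member enum standing in for kernpy's TokenCategory.

```python
from enum import Enum

class TokenCategory(Enum):   # hypothetical stand-in for kernpy's hierarchy
    PITCH = 1
    DURATION = 2

class Subtoken:
    def __init__(self, encoding: str, category: TokenCategory):
        self.encoding = encoding
        self.category = category

    def __eq__(self, other):
        if not isinstance(other, Subtoken):
            return False
        return self.encoding == other.encoding and self.category == other.category

    def __hash__(self):
        # Consistent with __eq__: equal subtokens hash equally.
        return hash((self.encoding, self.category))

a = Subtoken('c', TokenCategory.PITCH)
b = Subtoken('c', TokenCategory.PITCH)
c = Subtoken('c', TokenCategory.DURATION)
print(a == b, a == c)      # True False: category matters, not just encoding
print(len({a, b, c}))      # 2: a and b collapse to one set entry
```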

TextSpineImporter

Bases: SpineImporter

Source code in kernpy/core/text_spine_importer.py
class TextSpineImporter(SpineImporter):
    def __init__(self, verbose: Optional[bool] = False):
        """
        TextSpineImporter constructor.

        Args:
            verbose (Optional[bool]): Level of verbosity for error messages.
        """
        super().__init__(verbose=verbose)

    def import_listener(self) -> BaseANTLRSpineParserListener:
        return KernSpineListener()  # TODO: Create a custom functional listener for TextSpineImporter

    def import_token(self, encoding: str) -> Token:
        self._raise_error_if_wrong_input(encoding)

        try:
            kern_spine_importer = KernSpineImporter()
            token = kern_spine_importer.import_token(encoding)
        except Exception as e:
            return SimpleToken(encoding, TokenCategory.LYRICS)

        ACCEPTED_CATEGORIES = {
            TokenCategory.STRUCTURAL,
            TokenCategory.SIGNATURES,
            TokenCategory.EMPTY,
            TokenCategory.BARLINES,
            TokenCategory.IMAGE_ANNOTATIONS,
            TokenCategory.COMMENTS,
        }

        if not any(TokenCategory.is_child(child=token.category, parent=cat) for cat in ACCEPTED_CATEGORIES):
            return SimpleToken(encoding, TokenCategory.LYRICS)

        return token
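The fallback logic in import_token can be sketched independently of kernpy: try the stricter **kern importer first, and fall back to a default category when parsing fails or the resulting category is not in the accepted set. A minimal stand-alone sketch (the toy parser, category names, and accepted set below are illustrative, not the kernpy API):

```python
# Minimal sketch of the try-strict-then-fallback pattern used above.
# Categories are plain strings here; kernpy uses TokenCategory members.
ACCEPTED = {"STRUCTURAL", "SIGNATURES", "EMPTY", "BARLINES", "COMMENTS"}

def strict_parse(encoding: str) -> str:
    """Toy stand-in for KernSpineImporter: only barlines and comments succeed."""
    if encoding.startswith("="):
        return "BARLINES"
    if encoding.startswith("!"):
        return "COMMENTS"
    raise ValueError(f"cannot parse {encoding!r}")

def import_token(encoding: str) -> str:
    try:
        category = strict_parse(encoding)
    except ValueError:
        return "LYRICS"  # parsing failed: fall back, as TextSpineImporter does
    # Parsed, but only a whitelist of categories is kept; anything else
    # is also downgraded to the fallback category.
    return category if category in ACCEPTED else "LYRICS"
```

For example, `import_token("=1")` keeps the barline category, while free text such as `import_token("hello")` falls back to the lyrics category.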

__init__(verbose=False)

TextSpineImporter constructor.

Parameters:

Name Type Description Default
verbose Optional[bool]

Level of verbosity for error messages.

False
Source code in kernpy/core/text_spine_importer.py
12
13
14
15
16
17
18
19
def __init__(self, verbose: Optional[bool] = False):
    """
    TextSpineImporter constructor.

    Args:
        verbose (Optional[bool]): Level of verbosity for error messages.
    """
    super().__init__(verbose=verbose)

TimeSignatureToken

Bases: SignatureToken

TimeSignatureToken class.

Source code in kernpy/core/tokens.py
1672
1673
1674
1675
1676
1677
1678
class TimeSignatureToken(SignatureToken):
    """
    TimeSignatureToken class.
    """

    def __init__(self, encoding):
        super().__init__(encoding, TokenCategory.TIME_SIGNATURE)

Token

Bases: AbstractToken, ABC

Abstract Token class.

Source code in kernpy/core/tokens.py
1471
1472
1473
1474
1475
1476
1477
class Token(AbstractToken, ABC):
    """
    Abstract Token class.
    """

    def __init__(self, encoding, category):
        super().__init__(encoding, category)

TokenCategory

Bases: Enum

Options for the category of a token.

This is used to determine what kind of token should be exported.

The categories are listed in the specific order used to compare and sort them. The hierarchical (parent-child) order is defined in separate data structures.

Source code in kernpy/core/tokens.py
 23
 24
 25
 26
 27
 28
 29
 30
 31
 32
 33
 34
 35
 36
 37
 38
 39
 40
 41
 42
 43
 44
 45
 46
 47
 48
 49
 50
 51
 52
 53
 54
 55
 56
 57
 58
 59
 60
 61
 62
 63
 64
 65
 66
 67
 68
 69
 70
 71
 72
 73
 74
 75
 76
 77
 78
 79
 80
 81
 82
 83
 84
 85
 86
 87
 88
 89
 90
 91
 92
 93
 94
 95
 96
 97
 98
 99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
class TokenCategory(Enum):
    """
    Options for the category of a token.

    This is used to determine what kind of token should be exported.

    The categories are listed in the specific order used to compare and sort them. The hierarchical (parent-child) order is defined in separate data structures.
    """
    STRUCTURAL = auto()  # header, spine operations
    HEADER = auto()  # **kern, **mens, **text, **harm, **mxhm, **root, **dyn, **dynam, **fing
    SPINE_OPERATION = auto()
    CORE = auto() # notes, rests, chords, etc.
    ERROR = auto()
    NOTE_REST = auto()
    NOTE = auto()
    DURATION = auto()
    PITCH = auto()
    ALTERATION = auto()
    DECORATION = auto()
    REST = auto()
    CHORD = auto()
    EMPTY = auto()  # placeholders, null interpretation
    SIGNATURES = auto()
    CLEF = auto()
    TIME_SIGNATURE = auto()
    METER_SYMBOL = auto()
    KEY_SIGNATURE = auto()
    KEY_TOKEN = auto()
    ENGRAVED_SYMBOLS = auto()
    OTHER_CONTEXTUAL = auto()
    BARLINES = auto()
    COMMENTS = auto()
    FIELD_COMMENTS = auto()
    LINE_COMMENTS = auto()
    DYNAMICS = auto()
    HARMONY = auto()
    FINGERING = auto()
    LYRICS = auto()
    INSTRUMENTS = auto()
    IMAGE_ANNOTATIONS = auto()
    BOUNDING_BOXES = auto()
    LINE_BREAK = auto()
    OTHER = auto()
    MHXM = auto()
    ROOT = auto()

    def __lt__(self, other):
        """
        Compare two TokenCategory.
        Args:
            other (TokenCategory): The other category to compare.

        Returns (bool): True if this category is lower than the other, False otherwise.

        Examples:
            >>> TokenCategory.STRUCTURAL < TokenCategory.CORE
            True
            >>> TokenCategory.STRUCTURAL < TokenCategory.STRUCTURAL
            False
            >>> TokenCategory.CORE < TokenCategory.STRUCTURAL
            False
            >>> sorted([TokenCategory.STRUCTURAL, TokenCategory.CORE])
            [TokenCategory.STRUCTURAL, TokenCategory.CORE]
        """
        if isinstance(other, TokenCategory):
            return self.value < other.value
        return NotImplemented

    @classmethod
    def all(cls) -> Set[TokenCategory]:
        f"""
        Get all categories in the hierarchy.

        Returns:
            Set[TokenCategory]: The set of all categories in the hierarchy.

        Examples:
            >>> import kernpy as kp
            >>> kp.TokenCategory.all()
            set([<TokenCategory.MHXM: 29>, <TokenCategory.COMMENTS: 19>, <TokenCategory.BARLINES: 18>, <TokenCategory.CORE: 2>, <TokenCategory.BOUNDING_BOXES: 27>, <TokenCategory.NOTE_REST: 3>, <TokenCategory.NOTE: 4>, <TokenCategory.ENGRAVED_SYMBOLS: 16>, <TokenCategory.SIGNATURES: 11>, <TokenCategory.REST: 8>, <TokenCategory.METER_SYMBOL: 14>, <TokenCategory.HARMONY: 23>, <TokenCategory.KEY_SIGNATURE: 15>, <TokenCategory.EMPTY: 10>, <TokenCategory.PITCH: 6>, <TokenCategory.LINE_COMMENTS: 21>, <TokenCategory.FINGERING: 24>, <TokenCategory.DECORATION: 7>, <TokenCategory.OTHER: 28>, <TokenCategory.INSTRUMENTS: 26>, <TokenCategory.STRUCTURAL: 1>, <TokenCategory.FIELD_COMMENTS: 20>, <TokenCategory.LYRICS: 25>, <TokenCategory.CLEF: 12>, <TokenCategory.DURATION: 5>, <TokenCategory.DYNAMICS: 22>, <TokenCategory.CHORD: 9>, <TokenCategory.TIME_SIGNATURE: 13>, <TokenCategory.OTHER_CONTEXTUAL: 17>])
        """
        return set([t for t in TokenCategory])

    @classmethod
    def tree(cls):
        """
        Return a string representation of the category hierarchy
        Returns (str): The string representation of the category hierarchy

        Examples:
            >>> import kernpy as kp
            >>> print(kp.TokenCategory.tree())
            .
            ├── TokenCategory.STRUCTURAL
            ├── TokenCategory.CORE
            │   ├── TokenCategory.NOTE_REST
            │   │   ├── TokenCategory.DURATION
            │   │   ├── TokenCategory.NOTE
            │   │   │   ├── TokenCategory.PITCH
            │   │   │   └── TokenCategory.DECORATION
            │   │   └── TokenCategory.REST
            │   ├── TokenCategory.CHORD
            │   └── TokenCategory.EMPTY
            ├── TokenCategory.SIGNATURES
            │   ├── TokenCategory.CLEF
            │   ├── TokenCategory.TIME_SIGNATURE
            │   ├── TokenCategory.METER_SYMBOL
            │   └── TokenCategory.KEY_SIGNATURE
            ├── TokenCategory.ENGRAVED_SYMBOLS
            ├── TokenCategory.OTHER_CONTEXTUAL
            ├── TokenCategory.BARLINES
            ├── TokenCategory.COMMENTS
            │   ├── TokenCategory.FIELD_COMMENTS
            │   └── TokenCategory.LINE_COMMENTS
            ├── TokenCategory.DYNAMICS
            ├── TokenCategory.HARMONY
            ├── TokenCategory.FINGERING
            ├── TokenCategory.LYRICS
            ├── TokenCategory.INSTRUMENTS
            ├── TokenCategory.BOUNDING_BOXES
            └── TokenCategory.OTHER
        """
        return TokenCategoryHierarchyMapper.tree()

    @classmethod
    def is_child(cls, *, child: TokenCategory, parent: TokenCategory) -> bool:
        """
        Check if the child category is a child of the parent category.

        Args:
            child (TokenCategory): The child category.
            parent (TokenCategory): The parent category.

        Returns (bool): True if the child category is a child of the parent category, False otherwise.
        """
        return TokenCategoryHierarchyMapper.is_child(parent=parent, child=child)

    @classmethod
    def children(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the children of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of child categories of the target category.
        """
        return TokenCategoryHierarchyMapper.children(parent=target)

    @classmethod
    def valid(cls, *, include: Optional[Set[TokenCategory]] = None, exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
        """
        Get the valid categories based on the include and exclude sets.

        Args:
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.
        """
        return TokenCategoryHierarchyMapper.valid(include=include, exclude=exclude)

    @classmethod
    def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the leaves of the subtree of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of leaf categories of the target category.
        """
        return TokenCategoryHierarchyMapper.leaves(target=target)

    @classmethod
    def nodes(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the nodes of the subtree of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of node categories of the target category.
        """
        return TokenCategoryHierarchyMapper.nodes(parent=target)

    @classmethod
    def match(cls,
              target: TokenCategory, *,
              include: Optional[Set[TokenCategory]] = None,
              exclude: Optional[Set[TokenCategory]] = None) -> bool:
        """
        Check if the target category matches the include and exclude sets.

        Args:
            target (TokenCategory): The target category.
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (bool): True if the target category matches the include and exclude sets, False otherwise.
        """
        return TokenCategoryHierarchyMapper.match(category=target, include=include, exclude=exclude)

    def __str__(self):
        """
        Get the string representation of the category.

        Returns (str): The string representation of the category.
        """
        return self.name

__lt__(other)

Compare two TokenCategory values.

Parameters:

Name Type Description Default
other TokenCategory

The other category to compare.

required

Returns (bool): True if this category is lower than the other, False otherwise.

Examples:

>>> TokenCategory.STRUCTURAL < TokenCategory.CORE
True
>>> TokenCategory.STRUCTURAL < TokenCategory.STRUCTURAL
False
>>> TokenCategory.CORE < TokenCategory.STRUCTURAL
False
>>> sorted([TokenCategory.STRUCTURAL, TokenCategory.CORE])
[TokenCategory.STRUCTURAL, TokenCategory.CORE]
Source code in kernpy/core/tokens.py
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
def __lt__(self, other):
    """
    Compare two TokenCategory.
    Args:
        other (TokenCategory): The other category to compare.

    Returns (bool): True if this category is lower than the other, False otherwise.

    Examples:
        >>> TokenCategory.STRUCTURAL < TokenCategory.CORE
        True
        >>> TokenCategory.STRUCTURAL < TokenCategory.STRUCTURAL
        False
        >>> TokenCategory.CORE < TokenCategory.STRUCTURAL
        False
        >>> sorted([TokenCategory.STRUCTURAL, TokenCategory.CORE])
        [TokenCategory.STRUCTURAL, TokenCategory.CORE]
    """
    if isinstance(other, TokenCategory):
        return self.value < other.value
    return NotImplemented
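Since __lt__ compares the underlying auto() values, declaration order is sort order. A self-contained sketch of this pattern (a toy enum, not the kernpy TokenCategory itself):

```python
from enum import Enum, auto

class Category(Enum):
    STRUCTURAL = auto()  # declared first, so it sorts first
    CORE = auto()
    BARLINES = auto()

    def __lt__(self, other):
        # Delegate ordering to the auto()-assigned integer values.
        if isinstance(other, Category):
            return self.value < other.value
        return NotImplemented

# sorted() relies on __lt__, so declaration order is recovered:
ordered = sorted([Category.BARLINES, Category.STRUCTURAL, Category.CORE])
```

Here `ordered` comes back as `[Category.STRUCTURAL, Category.CORE, Category.BARLINES]`, matching the declaration order, exactly as in the doctest above.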

__str__()

Get the string representation of the category.

Returns (str): The string representation of the category.

Source code in kernpy/core/tokens.py
230
231
232
233
234
235
236
def __str__(self):
    """
    Get the string representation of the category.

    Returns (str): The string representation of the category.
    """
    return self.name

children(target) classmethod

Get the children of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of child categories of the target category.

Source code in kernpy/core/tokens.py
160
161
162
163
164
165
166
167
168
169
170
@classmethod
def children(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the children of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (Set[TokenCategory]): The set of child categories of the target category.
    """
    return TokenCategoryHierarchyMapper.children(parent=target)

is_child(*, child, parent) classmethod

Check if the child category is a child of the parent category.

Parameters:

Name Type Description Default
child TokenCategory

The child category.

required
parent TokenCategory

The parent category.

required

Returns (bool): True if the child category is a child of the parent category, False otherwise.

Source code in kernpy/core/tokens.py
147
148
149
150
151
152
153
154
155
156
157
158
@classmethod
def is_child(cls, *, child: TokenCategory, parent: TokenCategory) -> bool:
    """
    Check if the child category is a child of the parent category.

    Args:
        child (TokenCategory): The child category.
        parent (TokenCategory): The parent category.

    Returns (bool): True if the child category is a child of the parent category, False otherwise.
    """
    return TokenCategoryHierarchyMapper.is_child(parent=parent, child=child)
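Under the hood the hierarchy is a nested dictionary (see TokenCategoryHierarchyMapper below), and is_child is a recursive descent over it. A minimal stand-alone sketch of that idea, using plain strings instead of TokenCategory members:

```python
# Nested-dict tree: keys are categories, values are their child subtrees.
HIERARCHY = {
    "CORE": {
        "NOTE_REST": {
            "NOTE": {"PITCH": {}},
            "REST": {},
        },
    },
    "SIGNATURES": {"CLEF": {}},
}

def _find(tree, target):
    """Return the subtree rooted at `target`, or None if absent."""
    if target in tree:
        return tree[target]
    for sub in tree.values():
        found = _find(sub, target)
        if found is not None:
            return found
    return None

def _nodes(tree):
    """All categories appearing anywhere in `tree`."""
    nodes = set(tree)
    for sub in tree.values():
        nodes |= _nodes(sub)
    return nodes

def is_child(parent: str, child: str, tree=HIERARCHY) -> bool:
    """True if `child` appears anywhere in the subtree rooted at `parent`."""
    subtree = _find(tree, parent)
    return subtree is not None and child in _nodes(subtree)
```

With this tree, `is_child("CORE", "PITCH")` is True (PITCH is a grandchild of CORE), while `is_child("CORE", "CLEF")` is False.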

leaves(target) classmethod

Get the leaves of the subtree of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of leaf categories of the target category.

Source code in kernpy/core/tokens.py
187
188
189
190
191
192
193
194
195
196
197
@classmethod
def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the leaves of the subtree of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (Set[TokenCategory]): The set of leaf categories of the target category.
    """
    return TokenCategoryHierarchyMapper.leaves(target=target)

match(target, *, include=None, exclude=None) classmethod

Check if the target category matches the include and exclude sets.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (bool): True if the target category matches the include and exclude sets, False otherwise.

Source code in kernpy/core/tokens.py
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
@classmethod
def match(cls,
          target: TokenCategory, *,
          include: Optional[Set[TokenCategory]] = None,
          exclude: Optional[Set[TokenCategory]] = None) -> bool:
    """
    Check if the target category matches the include and exclude sets.

    Args:
        target (TokenCategory): The target category.
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (bool): True if the target category matches the include and exclude sets, False otherwise.
    """
    return TokenCategoryHierarchyMapper.match(category=target, include=include, exclude=exclude)

nodes(target) classmethod

Get the nodes of the subtree of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of node categories of the target category.

Source code in kernpy/core/tokens.py
199
200
201
202
203
204
205
206
207
208
209
@classmethod
def nodes(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the nodes of the subtree of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (Set[TokenCategory]): The set of node categories of the target category.
    """
    return TokenCategoryHierarchyMapper.nodes(parent=target)

tree() classmethod

Return a string representation of the category hierarchy.

Returns (str): The string representation of the category hierarchy.

Examples:

>>> import kernpy as kp
>>> print(kp.TokenCategory.tree())
.
├── TokenCategory.STRUCTURAL
├── TokenCategory.CORE
│   ├── TokenCategory.NOTE_REST
│   │   ├── TokenCategory.DURATION
│   │   ├── TokenCategory.NOTE
│   │   │   ├── TokenCategory.PITCH
│   │   │   └── TokenCategory.DECORATION
│   │   └── TokenCategory.REST
│   ├── TokenCategory.CHORD
│   └── TokenCategory.EMPTY
├── TokenCategory.SIGNATURES
│   ├── TokenCategory.CLEF
│   ├── TokenCategory.TIME_SIGNATURE
│   ├── TokenCategory.METER_SYMBOL
│   └── TokenCategory.KEY_SIGNATURE
├── TokenCategory.ENGRAVED_SYMBOLS
├── TokenCategory.OTHER_CONTEXTUAL
├── TokenCategory.BARLINES
├── TokenCategory.COMMENTS
│   ├── TokenCategory.FIELD_COMMENTS
│   └── TokenCategory.LINE_COMMENTS
├── TokenCategory.DYNAMICS
├── TokenCategory.HARMONY
├── TokenCategory.FINGERING
├── TokenCategory.LYRICS
├── TokenCategory.INSTRUMENTS
├── TokenCategory.BOUNDING_BOXES
└── TokenCategory.OTHER
Source code in kernpy/core/tokens.py
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
@classmethod
def tree(cls):
    """
    Return a string representation of the category hierarchy
    Returns (str): The string representation of the category hierarchy

    Examples:
        >>> import kernpy as kp
        >>> print(kp.TokenCategory.tree())
        .
        ├── TokenCategory.STRUCTURAL
        ├── TokenCategory.CORE
        │   ├── TokenCategory.NOTE_REST
        │   │   ├── TokenCategory.DURATION
        │   │   ├── TokenCategory.NOTE
        │   │   │   ├── TokenCategory.PITCH
        │   │   │   └── TokenCategory.DECORATION
        │   │   └── TokenCategory.REST
        │   ├── TokenCategory.CHORD
        │   └── TokenCategory.EMPTY
        ├── TokenCategory.SIGNATURES
        │   ├── TokenCategory.CLEF
        │   ├── TokenCategory.TIME_SIGNATURE
        │   ├── TokenCategory.METER_SYMBOL
        │   └── TokenCategory.KEY_SIGNATURE
        ├── TokenCategory.ENGRAVED_SYMBOLS
        ├── TokenCategory.OTHER_CONTEXTUAL
        ├── TokenCategory.BARLINES
        ├── TokenCategory.COMMENTS
        │   ├── TokenCategory.FIELD_COMMENTS
        │   └── TokenCategory.LINE_COMMENTS
        ├── TokenCategory.DYNAMICS
        ├── TokenCategory.HARMONY
        ├── TokenCategory.FINGERING
        ├── TokenCategory.LYRICS
        ├── TokenCategory.INSTRUMENTS
        ├── TokenCategory.BOUNDING_BOXES
        └── TokenCategory.OTHER
    """
    return TokenCategoryHierarchyMapper.tree()

valid(*, include=None, exclude=None) classmethod

Get the valid categories based on the include and exclude sets.

Parameters:

Name Type Description Default
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.

Source code in kernpy/core/tokens.py
172
173
174
175
176
177
178
179
180
181
182
183
184
185
@classmethod
def valid(cls, *, include: Optional[Set[TokenCategory]] = None, exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
    """
    Get the valid categories based on the include and exclude sets.

    Args:
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.
    """
    return TokenCategoryHierarchyMapper.valid(include=include, exclude=exclude)
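Both valid and match reduce to set arithmetic over expanded subtrees: expand each included and excluded category to itself plus all of its descendants, subtract the two sets, and (for match) intersect the result with the target's own subtree. A simplified sketch with plain strings and an assumed children mapping (not the kernpy internals):

```python
CHILDREN = {"CORE": {"NOTE", "REST"}, "NOTE": {"PITCH"}}  # toy hierarchy

def expand(categories, children_of=CHILDREN):
    """Each category plus all of its descendants."""
    out = set()
    stack = list(categories)
    while stack:
        cat = stack.pop()
        if cat not in out:
            out.add(cat)
            stack.extend(children_of.get(cat, ()))
    return out

def valid(include, exclude, children_of=CHILDREN):
    # Included subtrees minus excluded subtrees.
    return expand(include, children_of) - expand(exclude, children_of)

def match(category, include, exclude, children_of=CHILDREN):
    # A category matches if any node of its subtree survives the filter.
    return bool(expand({category}, children_of) & valid(include, exclude, children_of))
```

For instance, `valid({"CORE"}, {"REST"})` yields `{"CORE", "NOTE", "PITCH"}`, so `match("NOTE", {"CORE"}, {"REST"})` is True while `match("REST", {"CORE"}, {"REST"})` is False.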

TokenCategoryHierarchyMapper

Mapping of the TokenCategory hierarchy.

This class is used to define the hierarchy of the TokenCategory. Useful related methods are provided.

Source code in kernpy/core/tokens.py
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
class TokenCategoryHierarchyMapper:
    """
    Mapping of the TokenCategory hierarchy.

    This class is used to define the hierarchy of the TokenCategory. Useful related methods are provided.
    """
    """
    The hierarchy of the TokenCategory is a recursive dictionary that defines the parent-child relationships \
        between the categories. It's a tree.
    """
    _hierarchy_typing = Dict[TokenCategory, '_hierarchy_typing']
    hierarchy: _hierarchy_typing = {
        TokenCategory.STRUCTURAL: {
            TokenCategory.HEADER: {},  # each leave must be an empty dictionary
            TokenCategory.SPINE_OPERATION: {},
        },
        TokenCategory.CORE: {
            TokenCategory.NOTE_REST: {
                TokenCategory.DURATION: {},
                TokenCategory.NOTE: {
                    TokenCategory.PITCH: {},
                    TokenCategory.DECORATION: {},
                    TokenCategory.ALTERATION: {},
                },
                TokenCategory.REST: {},
            },
            TokenCategory.CHORD: {},
            TokenCategory.EMPTY: {},
            TokenCategory.ERROR: {},
        },
        TokenCategory.SIGNATURES: {
            TokenCategory.CLEF: {},
            TokenCategory.TIME_SIGNATURE: {},
            TokenCategory.METER_SYMBOL: {},
            TokenCategory.KEY_SIGNATURE: {},
            TokenCategory.KEY_TOKEN: {},
        },
        TokenCategory.ENGRAVED_SYMBOLS: {},
        TokenCategory.OTHER_CONTEXTUAL: {},
        TokenCategory.BARLINES: {},
        TokenCategory.COMMENTS: {
            TokenCategory.FIELD_COMMENTS: {},
            TokenCategory.LINE_COMMENTS: {},
        },
        TokenCategory.DYNAMICS: {},
        TokenCategory.HARMONY: {},
        TokenCategory.FINGERING: {},
        TokenCategory.LYRICS: {},
        TokenCategory.INSTRUMENTS: {},
        TokenCategory.IMAGE_ANNOTATIONS: {
            TokenCategory.BOUNDING_BOXES: {},
            TokenCategory.LINE_BREAK: {},
        },
        TokenCategory.OTHER: {},
        TokenCategory.MHXM: {},
        TokenCategory.ROOT: {},
    }

    @classmethod
    def _is_child(cls, parent: TokenCategory, child: TokenCategory, *, tree: '_hierarchy_typing') -> bool:
        """
        Recursively check if `child` is in the subtree of `parent`.

        Args:
            parent (TokenCategory): The parent category.
            child (TokenCategory): The category to check.
            tree (_hierarchy_typing): The subtree to check.

        Returns:
            bool: True if `child` is a descendant of `parent`, False otherwise.
        """
        # Base case: the parent is empty.
        if len(tree.keys()) == 0:
            return False

        # Recursive case: explore the direct children of the parent.
        return any(
            direct_child == child or cls._is_child(direct_child, child, tree=tree[parent])
            for direct_child in tree.get(parent, {})
        )
        # Equivalent explicit-loop version of the comprehension above:
        #direct_children = tree.get(parent, dict())
        #for direct_child in direct_children.keys():
        #    if direct_child == child or cls._is_child(direct_child, child, tree=tree[parent]):
        #        return True

    @classmethod
    def is_child(cls, parent: TokenCategory, child: TokenCategory) -> bool:
        """
        Recursively check if `child` is in the subtree of `parent`. If `parent` is the same as `child`, return True.

        Args:
            parent (TokenCategory): The parent category.
            child (TokenCategory): The category to check.

        Returns:
            bool: True if `child` is a descendant of `parent`, False otherwise.
        """
        if parent == child:
            return True
        return cls._is_child(parent, child, tree=cls.hierarchy)

    @classmethod
    def children(cls, parent: TokenCategory) -> Set[TokenCategory]:
        """
        Get the direct children of the parent category.

        Args:
            parent (TokenCategory): The parent category.

        Returns:
            Set[TokenCategory]: The set of child categories of the parent category.
        """
        return set(cls.hierarchy.get(parent, {}).keys())

    @classmethod
    def _nodes(cls, tree: _hierarchy_typing) -> Set[TokenCategory]:
        """
        Recursively get all nodes in the given hierarchy tree.
        """
        nodes = set(tree.keys())
        for child in tree.values():
            nodes.update(cls._nodes(child))
        return nodes

    @classmethod
    def _find_subtree(cls, tree: '_hierarchy_typing', parent: TokenCategory) -> Optional['_hierarchy_typing']:
        """
        Recursively find the subtree for the given parent category.
        """
        if parent in tree:
            return tree[parent]  # Return subtree if parent is found at this level
        for child, sub_tree in tree.items():
            result = cls._find_subtree(sub_tree, parent)
            if result is not None:
                return result
        return None  # Parent not found; in practice this should never happen


    @classmethod
    def nodes(cls, parent: TokenCategory) -> Set[TokenCategory]:
        """
        Get all nodes of the subtree of the parent category.

        Args:
            parent (TokenCategory): The parent category.

        Returns:
            Set[TokenCategory]: The set of nodes of the subtree of the parent category.
        """
        subtree = cls._find_subtree(cls.hierarchy, parent)
        return cls._nodes(subtree) if subtree is not None else set()

    @classmethod
    def valid(cls,
              include: Optional[Set[TokenCategory]] = None,
              exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
        """
        Get the valid categories based on the include and exclude sets.

        Args:
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.
        """
        include = cls._validate_include(include)
        exclude = cls._validate_exclude(exclude)

        included_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in include]) if len(include) > 0 else include
        excluded_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in exclude]) if len(exclude) > 0 else exclude
        return included_nodes - excluded_nodes

    @classmethod
    def _leaves(cls, tree: '_hierarchy_typing') -> Set[TokenCategory]:
        """
        Recursively get all leaves (nodes without children) in the hierarchy tree.
        """
        if not tree:
            return set()
        leaves = {node for node, children in tree.items() if not children}
        for node, children in tree.items():
            leaves.update(cls._leaves(children))
        return leaves

    @classmethod
    def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
        """
        Get the leaves of the subtree of the target category.

        Args:
            target (TokenCategory): The target category.

        Returns (Set[TokenCategory]): The set of leaf categories of the target category.
        """
        tree = cls._find_subtree(cls.hierarchy, target)
        return cls._leaves(tree)


    @classmethod
    def _match(cls, category: TokenCategory, *,
               include: Set[TokenCategory],
               exclude: Set[TokenCategory]) -> bool:
        """
        Check if a category matches include/exclude criteria.
        """
        # Include the category itself along with its descendants.
        target_nodes = cls.nodes(category) | {category}

        valid_categories = cls.valid(include=include, exclude=exclude)

        # Check if any node in the target set is in the valid categories.
        return len(target_nodes & valid_categories) > 0

    @classmethod
    def _validate_include(cls, include: Optional[Set[TokenCategory]]) -> Set[TokenCategory]:
        """
        Validate the include set.
        """
        if include is None:
            return cls.all()
        if isinstance(include, (list, tuple)):
            include = set(include)
        elif not isinstance(include, set):
            include = {include}
        if not all(isinstance(cat, TokenCategory) for cat in include):
            raise ValueError('Invalid category: include and exclude must be a set of TokenCategory.')
        return include

    @classmethod
    def _validate_exclude(cls, exclude: Optional[Set[TokenCategory]]) -> Set[TokenCategory]:
        """
        Validate the exclude set.
        """
        if exclude is None:
            return set()
        if isinstance(exclude, (list, tuple)):
            exclude = set(exclude)
        elif not isinstance(exclude, set):
            exclude = {exclude}
        if not all(isinstance(cat, TokenCategory) for cat in exclude):
            raise ValueError(f'Invalid category: category must be a {TokenCategory.__name__}.')
        return exclude


    @classmethod
    def match(cls, category: TokenCategory, *,
              include: Optional[Set[TokenCategory]] = None,
              exclude: Optional[Set[TokenCategory]] = None) -> bool:
        """
        Check if the category matches the include and exclude sets.
            If include is None, all categories are included. \
            If exclude is None, no categories are excluded.

        Args:
            category (TokenCategory): The category to check.
            include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
                If None, all categories are included.
            exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
                If None, no categories are excluded.

        Returns (bool): True if the category matches the include and exclude sets, False otherwise.

        Examples:
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST})
            True
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.REST})
            True
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.NOTE})
            False
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
            True
            >>> TokenCategoryHierarchyMapper.match(TokenCategory.DURATION, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
            False
        """
        include = cls._validate_include(include)
        exclude = cls._validate_exclude(exclude)

        return cls._match(category, include=include, exclude=exclude)

    @classmethod
    def all(cls) -> Set[TokenCategory]:
        """
        Get all categories in the hierarchy.

        Returns:
            Set[TokenCategory]: The set of all categories in the hierarchy.
        """
        return cls._nodes(cls.hierarchy)

    @classmethod
    def tree(cls) -> str:
        """
        Return a string representation of the category hierarchy,
        formatted similar to the output of the Unix 'tree' command.

        Example output:
            .
            ├── STRUCTURAL
            ├── CORE
            │   ├── NOTE_REST
            │   │   ├── DURATION
            │   │   ├── NOTE
            │   │   │   ├── PITCH
            │   │   │   └── DECORATION
            │   │   └── REST
            │   ├── CHORD
            │   └── EMPTY
            ├── SIGNATURES
            │   ├── CLEF
            │   ├── TIME_SIGNATURE
            │   ├── METER_SYMBOL
            │   └── KEY_SIGNATURE
            ├── ENGRAVED_SYMBOLS
            ├── OTHER_CONTEXTUAL
            ├── BARLINES
            ├── COMMENTS
            │   ├── FIELD_COMMENTS
            │   └── LINE_COMMENTS
            ├── DYNAMICS
            ├── HARMONY
            ...
        """
        def build_tree(tree: Dict[TokenCategory, '_hierarchy_typing'], prefix: str = "") -> [str]:
            lines_buffer = []
            items = list(tree.items())
            count = len(items)
            for index, (category, subtree) in enumerate(items):
                connector = "└── " if index == count - 1 else "├── "
                lines_buffer.append(prefix + connector + str(category))
                extension = "    " if index == count - 1 else "│   "
                lines_buffer.extend(build_tree(subtree, prefix + extension))
            return lines_buffer

        lines = ["."]
        lines.extend(build_tree(cls.hierarchy))
        return "\n".join(lines)

all() classmethod

Get all categories in the hierarchy.

Returns:

Type Description
Set[TokenCategory]

Set[TokenCategory]: The set of all categories in the hierarchy.

Source code in kernpy/core/tokens.py
@classmethod
def all(cls) -> Set[TokenCategory]:
    """
    Get all categories in the hierarchy.

    Returns:
        Set[TokenCategory]: The set of all categories in the hierarchy.
    """
    return cls._nodes(cls.hierarchy)

children(parent) classmethod

Get the direct children of the parent category.

Parameters:

Name Type Description Default
parent TokenCategory

The parent category.

required

Returns:

Type Description
Set[TokenCategory]

Set[TokenCategory]: The set of child categories of the parent category.

Source code in kernpy/core/tokens.py
@classmethod
def children(cls, parent: TokenCategory) -> Set[TokenCategory]:
    """
    Get the direct children of the parent category.

    Args:
        parent (TokenCategory): The parent category.

    Returns:
        Set[TokenCategory]: The set of child categories of the parent category.
    """
    return set(cls.hierarchy.get(parent, {}).keys())
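The lookup is a single dictionary access on the nested hierarchy dict. A self-contained sketch with a toy hierarchy (plain strings stand in for `TokenCategory` members; this is not the kernpy API itself) illustrates the behavior:

```python
# Toy hierarchy with the same nested-dict shape the mapper uses
# (plain strings stand in for TokenCategory members).
HIERARCHY = {
    "CORE": {
        "NOTE_REST": {"NOTE": {}, "REST": {}},
        "CHORD": {},
    },
    "BARLINES": {},
}

def children(parent, tree=HIERARCHY):
    """Direct children of `parent` at the top level of `tree`."""
    return set(tree.get(parent, {}).keys())

print(sorted(children("CORE")))      # ['CHORD', 'NOTE_REST']
print(sorted(children("BARLINES")))  # []
```

An unknown parent simply yields the empty set, mirroring the `dict.get(parent, {})` fallback above.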

is_child(parent, child) classmethod

Recursively check if child is in the subtree of parent. If parent is the same as child, return True.

Parameters:

Name Type Description Default
parent TokenCategory

The parent category.

required
child TokenCategory

The category to check.

required

Returns:

Name Type Description
bool bool

True if child is a descendant of parent, False otherwise.

Source code in kernpy/core/tokens.py
@classmethod
def is_child(cls, parent: TokenCategory, child: TokenCategory) -> bool:
    """
    Recursively check if `child` is in the subtree of `parent`. If `parent` is the same as `child`, return True.

    Args:
        parent (TokenCategory): The parent category.
        child (TokenCategory): The category to check.

    Returns:
        bool: True if `child` is a descendant of `parent`, False otherwise.
    """
    if parent == child:
        return True
    return cls._is_child(parent, child, tree=cls.hierarchy)
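The descendant check can be sketched end to end with a toy hierarchy (stand-in strings, not the kernpy types): locate `parent` anywhere in the tree, then search its subtree for `child`.

```python
HIERARCHY = {
    "CORE": {
        "NOTE_REST": {"NOTE": {}, "REST": {}},
        "CHORD": {},
    },
    "BARLINES": {},
}

def _in_subtree(child, subtree):
    """True if `child` appears anywhere inside `subtree`."""
    for node, kids in subtree.items():
        if node == child or _in_subtree(child, kids):
            return True
    return False

def is_child(parent, child, tree=HIERARCHY):
    """True if `child` equals `parent` or lies in `parent`'s subtree."""
    if parent == child:
        return True
    for node, subtree in tree.items():
        if node == parent:
            return _in_subtree(child, subtree)
        if is_child(parent, child, subtree):
            return True
    return False

print(is_child("CORE", "NOTE"))       # True
print(is_child("NOTE_REST", "CHORD"))  # False
```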

leaves(target) classmethod

Get the leaves of the subtree of the target category.

Parameters:

Name Type Description Default
target TokenCategory

The target category.

required

Returns (Set[TokenCategory]): The set of leaf categories of the target category.

Source code in kernpy/core/tokens.py
@classmethod
def leaves(cls, target: TokenCategory) -> Set[TokenCategory]:
    """
    Get the leaves of the subtree of the target category.

    Args:
        target (TokenCategory): The target category.

    Returns (Set[TokenCategory]): The set of leaf categories of the target category.
    """
    tree = cls._find_subtree(cls.hierarchy, target)
    return cls._leaves(tree)
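The recursive leaf collection above can be sketched on a toy hierarchy (stand-in strings, not kernpy's own types): a node is a leaf exactly when its children dict is empty.

```python
HIERARCHY = {
    "CORE": {
        "NOTE_REST": {"NOTE": {}, "REST": {}},
        "CHORD": {},
    },
    "BARLINES": {},
}

def leaves(tree):
    """Every node in `tree` with no children, at any depth."""
    if not tree:
        return set()
    found = {node for node, kids in tree.items() if not kids}
    for kids in tree.values():
        found |= leaves(kids)
    return found

print(sorted(leaves(HIERARCHY)))  # ['BARLINES', 'CHORD', 'NOTE', 'REST']
```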

match(category, *, include=None, exclude=None) classmethod

Check if the category matches the include and exclude sets. If include is None, all categories are included. If exclude is None, no categories are excluded.

Parameters:

Name Type Description Default
category TokenCategory

The category to check.

required
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (bool): True if the category matches the include and exclude sets, False otherwise.

Examples:

>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST})
True
>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.REST})
True
>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.NOTE})
False
>>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
True
>>> TokenCategoryHierarchyMapper.match(TokenCategory.DURATION, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
False
Source code in kernpy/core/tokens.py
@classmethod
def match(cls, category: TokenCategory, *,
          include: Optional[Set[TokenCategory]] = None,
          exclude: Optional[Set[TokenCategory]] = None) -> bool:
    """
    Check if the category matches the include and exclude sets.
        If include is None, all categories are included. \
        If exclude is None, no categories are excluded.

    Args:
        category (TokenCategory): The category to check.
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (bool): True if the category matches the include and exclude sets, False otherwise.

    Examples:
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST})
        True
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.REST})
        True
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.NOTE_REST}, exclude={TokenCategory.NOTE})
        False
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.NOTE, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
        True
        >>> TokenCategoryHierarchyMapper.match(TokenCategory.DURATION, include={TokenCategory.CORE}, exclude={TokenCategory.DURATION})
        False
    """
    include = cls._validate_include(include)
    exclude = cls._validate_exclude(exclude)

    return cls._match(category, include=include, exclude=exclude)

nodes(parent) classmethod

Get all nodes of the subtree of the parent category.

Parameters:

Name Type Description Default
parent TokenCategory

The parent category.

required

Returns:

Type Description
Set[TokenCategory]

Set[TokenCategory]: The set of nodes of the subtree of the parent category.

Source code in kernpy/core/tokens.py
@classmethod
def nodes(cls, parent: TokenCategory) -> Set[TokenCategory]:
    """
    Get all nodes of the subtree of the parent category.

    Args:
        parent (TokenCategory): The parent category.

    Returns:
        Set[TokenCategory]: The set of nodes of the subtree of the parent category.
    """
    subtree = cls._find_subtree(cls.hierarchy, parent)
    return cls._nodes(subtree) if subtree is not None else set()

tree() classmethod

Return a string representation of the category hierarchy, formatted similar to the output of the Unix 'tree' command.

Example output

.
├── STRUCTURAL
├── CORE
│   ├── NOTE_REST
│   │   ├── DURATION
│   │   ├── NOTE
│   │   │   ├── PITCH
│   │   │   └── DECORATION
│   │   └── REST
│   ├── CHORD
│   └── EMPTY
├── SIGNATURES
│   ├── CLEF
│   ├── TIME_SIGNATURE
│   ├── METER_SYMBOL
│   └── KEY_SIGNATURE
├── ENGRAVED_SYMBOLS
├── OTHER_CONTEXTUAL
├── BARLINES
├── COMMENTS
│   ├── FIELD_COMMENTS
│   └── LINE_COMMENTS
├── DYNAMICS
├── HARMONY
...

Source code in kernpy/core/tokens.py
@classmethod
def tree(cls) -> str:
    """
    Return a string representation of the category hierarchy,
    formatted similar to the output of the Unix 'tree' command.

    Example output:
        .
        ├── STRUCTURAL
        ├── CORE
        │   ├── NOTE_REST
        │   │   ├── DURATION
        │   │   ├── NOTE
        │   │   │   ├── PITCH
        │   │   │   └── DECORATION
        │   │   └── REST
        │   ├── CHORD
        │   └── EMPTY
        ├── SIGNATURES
        │   ├── CLEF
        │   ├── TIME_SIGNATURE
        │   ├── METER_SYMBOL
        │   └── KEY_SIGNATURE
        ├── ENGRAVED_SYMBOLS
        ├── OTHER_CONTEXTUAL
        ├── BARLINES
        ├── COMMENTS
        │   ├── FIELD_COMMENTS
        │   └── LINE_COMMENTS
        ├── DYNAMICS
        ├── HARMONY
        ...
    """
    def build_tree(tree: Dict[TokenCategory, '_hierarchy_typing'], prefix: str = "") -> [str]:
        lines_buffer = []
        items = list(tree.items())
        count = len(items)
        for index, (category, subtree) in enumerate(items):
            connector = "└── " if index == count - 1 else "├── "
            lines_buffer.append(prefix + connector + str(category))
            extension = "    " if index == count - 1 else "│   "
            lines_buffer.extend(build_tree(subtree, prefix + extension))
        return lines_buffer

    lines = ["."]
    lines.extend(build_tree(cls.hierarchy))
    return "\n".join(lines)
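The rendering logic is a standard recursive tree walk: the last child at each level gets the `└── ` connector and a blank continuation prefix, every other child gets `├── ` and a `│   ` prefix. A self-contained sketch on a toy hierarchy (stand-in strings, not kernpy's categories):

```python
def render(tree, prefix=""):
    """Return 'tree'-command-style lines for a nested dict."""
    lines = []
    items = list(tree.items())
    for index, (name, subtree) in enumerate(items):
        last = index == len(items) - 1
        lines.append(prefix + ("└── " if last else "├── ") + name)
        lines.extend(render(subtree, prefix + ("    " if last else "│   ")))
    return lines

HIERARCHY = {"CORE": {"NOTE": {}, "REST": {}}, "BARLINES": {}}
print("\n".join(["."] + render(HIERARCHY)))
```

This prints:

```
.
├── CORE
│   ├── NOTE
│   └── REST
└── BARLINES
```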

valid(include=None, exclude=None) classmethod

Get the valid categories based on the include and exclude sets.

Parameters:

Name Type Description Default
include Optional[Set[TokenCategory]]

The set of categories to include. Defaults to None. If None, all categories are included.

None
exclude Optional[Set[TokenCategory]]

The set of categories to exclude. Defaults to None. If None, no categories are excluded.

None

Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.

Source code in kernpy/core/tokens.py
@classmethod
def valid(cls,
          include: Optional[Set[TokenCategory]] = None,
          exclude: Optional[Set[TokenCategory]] = None) -> Set[TokenCategory]:
    """
    Get the valid categories based on the include and exclude sets.

    Args:
        include (Optional[Set[TokenCategory]]): The set of categories to include. Defaults to None. \
            If None, all categories are included.
        exclude (Optional[Set[TokenCategory]]): The set of categories to exclude. Defaults to None. \
            If None, no categories are excluded.

    Returns (Set[TokenCategory]): The set of valid categories based on the include and exclude sets.
    """
    include = cls._validate_include(include)
    exclude = cls._validate_exclude(exclude)

    included_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in include]) if len(include) > 0 else include
    excluded_nodes = set.union(*[(cls.nodes(cat) | {cat}) for cat in exclude]) if len(exclude) > 0 else exclude
    return included_nodes - excluded_nodes
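The set algebra is: expand every included category to itself plus all its descendants, do the same for the excluded ones, and subtract. A self-contained sketch with a toy hierarchy (stand-in strings; `find_subtree` and `nodes` are simplified analogues of the private helpers, not the kernpy API):

```python
HIERARCHY = {"CORE": {"NOTE_REST": {"NOTE": {}, "REST": {}}, "CHORD": {}}}

def nodes(tree):
    """Every category in `tree`, at any depth."""
    result = set(tree)
    for kids in tree.values():
        result |= nodes(kids)
    return result

def find_subtree(tree, target):
    """Children dict of `target`, searched at any depth; None if absent."""
    if target in tree:
        return tree[target]
    for kids in tree.values():
        found = find_subtree(kids, target)
        if found is not None:
            return found
    return None

def valid(tree, include, exclude):
    """Included categories (each plus its descendants) minus excluded ones."""
    def expand(cats):
        out = set()
        for cat in cats:
            subtree = find_subtree(tree, cat)
            out |= {cat} | (nodes(subtree) if subtree is not None else set())
        return out
    return expand(include) - expand(exclude)

print(sorted(valid(HIERARCHY, {"CORE"}, {"REST"})))
# ['CHORD', 'CORE', 'NOTE', 'NOTE_REST']
```

Note that excluding a category also excludes its descendants, which is why `REST` disappears even though `CORE` pulled it in.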

Tokenizer

Bases: ABC

Tokenizer interface. All tokenizers must implement this interface.

Tokenizers are responsible for converting a token into a string representation.

Source code in kernpy/core/tokenizers.py
class Tokenizer(ABC):
    """
    Tokenizer interface. All tokenizers must implement this interface.

    Tokenizers are responsible for converting a token into a string representation.
    """
    def __init__(self, *, token_categories: Set['TokenCategory']):
        """
        Create a new Tokenizer.

        Args:
            token_categories Set[TokenCategory]: Set of categories to be tokenized.
                If None, an exception will be raised.
        """
        if token_categories is None:
            raise ValueError('Categories must be provided. Found None.')

        self.token_categories = token_categories


    @abstractmethod
    def tokenize(self, token: Token) -> str:
        """
        Tokenize a token into a string representation.

        Args:
            token (Token): Token to be tokenized.

        Returns (str): Tokenized string representation.

        """
        pass

__init__(*, token_categories)

Create a new Tokenizer.

Parameters:

Name Type Description Default
token_categories Set[TokenCategory]

Set of categories to be tokenized. If None, an exception will be raised.

required
Source code in kernpy/core/tokenizers.py
def __init__(self, *, token_categories: Set['TokenCategory']):
    """
    Create a new Tokenizer.

    Args:
        token_categories Set[TokenCategory]: Set of categories to be tokenized.
            If None, an exception will be raised.
    """
    if token_categories is None:
        raise ValueError('Categories must be provided. Found None.')

    self.token_categories = token_categories

tokenize(token) abstractmethod

Tokenize a token into a string representation.

Parameters:

Name Type Description Default
token Token

Token to be tokenized.

required

Returns (str): Tokenized string representation.

Source code in kernpy/core/tokenizers.py
@abstractmethod
def tokenize(self, token: Token) -> str:
    """
    Tokenize a token into a string representation.

    Args:
        token (Token): Token to be tokenized.

    Returns (str): Tokenized string representation.

    """
    pass
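A concrete tokenizer implements `tokenize` and decides what to emit per token. The sketch below is self-contained (it redefines a minimal `Token` and the interface locally; `PlainTokenizer` is a hypothetical subclass, not part of kernpy):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Token:                     # stand-in for kernpy's Token
    encoding: str
    category: str

class Tokenizer(ABC):
    def __init__(self, *, token_categories):
        if token_categories is None:
            raise ValueError('Categories must be provided. Found None.')
        self.token_categories = token_categories

    @abstractmethod
    def tokenize(self, token) -> str:
        ...

class PlainTokenizer(Tokenizer):  # hypothetical concrete tokenizer
    def tokenize(self, token) -> str:
        # Emit the raw encoding for in-scope categories, '.' otherwise.
        if token.category in self.token_categories:
            return token.encoding
        return "."

tok = PlainTokenizer(token_categories={"NOTE"})
print(tok.tokenize(Token("4e", "NOTE")))        # 4e
print(tok.tokenize(Token("!!meta", "COMMENTS")))  # .
```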

TokensTraversal

Bases: TreeTraversalInterface

Source code in kernpy/core/document.py
class TokensTraversal(TreeTraversalInterface):
    def __init__(
            self,
            non_repeated: bool,
            filter_by_categories
    ):
        """
        Create an instance of `TokensTraversal`.
        Args:
            non_repeated: If True, only unique tokens are returned. If False, all tokens are returned.
            filter_by_categories: A list of categories to filter the tokens. If None, all tokens are returned.
        """
        self.tokens = []
        self.seen_encodings = []
        self.non_repeated = non_repeated
        self.filter_by_categories = [t for t in TokenCategory] if filter_by_categories is None else filter_by_categories

    def visit(self, node):
        if (node.token
                and (not self.non_repeated or node.token.encoding not in self.seen_encodings)
                and (self.filter_by_categories is None or node.token.category in self.filter_by_categories)
        ):
            self.tokens.append(node.token)
            if self.non_repeated:
                self.seen_encodings.append(node.token.encoding)

__init__(non_repeated, filter_by_categories)

Create an instance of `TokensTraversal`.

Parameters:

Name Type Description Default
non_repeated bool

If True, only unique tokens are returned. If False, all tokens are returned.

required
filter_by_categories

A list of categories to filter the tokens. If None, all tokens are returned.

required

Source code in kernpy/core/document.py
def __init__(
        self,
        non_repeated: bool,
        filter_by_categories
):
    """
    Create an instance of `TokensTraversal`.
    Args:
        non_repeated: If True, only unique tokens are returned. If False, all tokens are returned.
        filter_by_categories: A list of categories to filter the tokens. If None, all tokens are returned.
    """
    self.tokens = []
    self.seen_encodings = []
    self.non_repeated = non_repeated
    self.filter_by_categories = [t for t in TokenCategory] if filter_by_categories is None else filter_by_categories
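`TokensTraversal` is a visitor: a tree walk calls `visit` on each node, and the traversal accumulates matching tokens. The self-contained sketch below pairs a simplified traversal with a toy node tree and a hypothetical `dfs` driver (the real walk is performed by kernpy's document tree, not shown here):

```python
from dataclasses import dataclass

@dataclass
class Token:                     # stand-in for kernpy's Token
    encoding: str
    category: str

class Node:                      # stand-in for a document tree node
    def __init__(self, token=None, children=()):
        self.token = token
        self.children = list(children)

class TokensTraversal:
    def __init__(self, non_repeated, filter_by_categories):
        self.tokens = []
        self.seen = set()
        self.non_repeated = non_repeated
        self.filter = filter_by_categories  # None means "no filtering"

    def visit(self, node):
        tok = node.token
        if (tok
                and (not self.non_repeated or tok.encoding not in self.seen)
                and (self.filter is None or tok.category in self.filter)):
            self.tokens.append(tok)
            if self.non_repeated:
                self.seen.add(tok.encoding)

def dfs(node, traversal):
    """Hypothetical driver: depth-first walk applying the visitor."""
    traversal.visit(node)
    for child in node.children:
        dfs(child, traversal)

root = Node(children=[Node(Token("4e", "NOTE")),
                      Node(Token("4e", "NOTE")),      # duplicate encoding
                      Node(Token("=", "BARLINES"))])
t = TokensTraversal(non_repeated=True, filter_by_categories={"NOTE"})
dfs(root, t)
print([tok.encoding for tok in t.tokens])  # ['4e']
```

With `non_repeated=True` the duplicate `4e` is dropped, and the barline token is filtered out by category.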

TraversalFactory

Source code in kernpy/core/document.py
class TraversalFactory:
    class Categories(Enum):
        METACOMMENTS = "metacomments"
        TOKENS = "tokens"

    @classmethod
    def create(
            cls,
            traversal_type: str,
            non_repeated: bool,
            filter_by_categories: Optional[Sequence[TokenCategory]]
    ) -> TreeTraversalInterface:
        """
        Create an instance of `TreeTraversalInterface` based on the `traversal_type`.
        Args:
            non_repeated:
            filter_by_categories:
            traversal_type: The type of traversal to use. Possible values are:
                - "metacomments"
                - "tokens"

        Returns: An instance of `TreeTraversalInterface`.
        """
        if traversal_type == cls.Categories.METACOMMENTS.value:
            return MetacommentsTraversal()
        elif traversal_type == cls.Categories.TOKENS.value:
            return TokensTraversal(non_repeated, filter_by_categories)

        raise ValueError(f"Unknown traversal type: {traversal_type}")

create(traversal_type, non_repeated, filter_by_categories) classmethod

Create an instance of `TreeTraversalInterface` based on the `traversal_type`.

Parameters:

Name Type Description Default
traversal_type str

The type of traversal to use. Possible values are "metacomments" and "tokens".

required
non_repeated bool

If True, only unique tokens are returned. If False, all tokens are returned.

required
filter_by_categories Optional[Sequence[TokenCategory]]

A list of categories to filter the tokens. If None, all tokens are returned.

required

Returns: An instance of TreeTraversalInterface.

Source code in kernpy/core/document.py
@classmethod
def create(
        cls,
        traversal_type: str,
        non_repeated: bool,
        filter_by_categories: Optional[Sequence[TokenCategory]]
) -> TreeTraversalInterface:
    """
    Create an instance of `TreeTraversalInterface` based on the `traversal_type`.
    Args:
        non_repeated:
        filter_by_categories:
        traversal_type: The type of traversal to use. Possible values are:
            - "metacomments"
            - "tokens"

    Returns: An instance of `TreeTraversalInterface`.
    """
    if traversal_type == cls.Categories.METACOMMENTS.value:
        return MetacommentsTraversal()
    elif traversal_type == cls.Categories.TOKENS.value:
        return TokensTraversal(non_repeated, filter_by_categories)

    raise ValueError(f"Unknown traversal type: {traversal_type}")
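The factory dispatches on the string *value* of an inner `Enum`, so callers pass plain strings like `"tokens"`. A self-contained sketch of the same pattern (the two traversal classes here are empty stand-ins, not kernpy's real ones):

```python
from enum import Enum

class MetacommentsTraversal:      # stand-ins for the real traversal classes
    pass

class TokensTraversal:
    pass

class TraversalFactory:
    class Categories(Enum):
        METACOMMENTS = "metacomments"
        TOKENS = "tokens"

    @classmethod
    def create(cls, traversal_type):
        # Dispatch on the enum *value*, so callers pass plain strings.
        if traversal_type == cls.Categories.METACOMMENTS.value:
            return MetacommentsTraversal()
        if traversal_type == cls.Categories.TOKENS.value:
            return TokensTraversal()
        raise ValueError(f"Unknown traversal type: {traversal_type}")

print(type(TraversalFactory.create("tokens")).__name__)  # TokensTraversal
```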

TreeTraversalInterface

Bases: ABC

TreeTraversalInterface class.

This class is used to traverse the tree. The TreeTraversalInterface class is responsible for implementing the visit method.

Source code in kernpy/core/document.py
class TreeTraversalInterface(ABC):
    """
    TreeTraversalInterface class.

    This class is used to traverse the tree. The `TreeTraversalInterface` class is responsible for implementing
    the `visit` method.
    """

    @abstractmethod
    def visit(self, node):
        pass

agnostic_distance(first_pitch, second_pitch)

Calculate the distance in semitones between two pitches.

Parameters:

Name Type Description Default
first_pitch AgnosticPitch

The first pitch to compare.

required
second_pitch AgnosticPitch

The second pitch to compare.

required

Returns:

Name Type Description
int int

The distance in semitones between the two pitches.

Examples:

>>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('E4'))
4
>>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('B3'))
-1
Source code in kernpy/core/transposer.py
def agnostic_distance(
    first_pitch: AgnosticPitch,
    second_pitch: AgnosticPitch,
) -> int:
    """
    Calculate the distance in semitones between two pitches.

    Args:
        first_pitch (AgnosticPitch): The first pitch to compare.
        second_pitch (AgnosticPitch): The second pitch to compare.

    Returns:
        int: The distance in semitones between the two pitches.

    Examples:
        >>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('E4'))
        4
        >>> agnostic_distance(AgnosticPitch('C4'), AgnosticPitch('B3'))
        -1
    """
    def semitone_index(p: AgnosticPitch) -> int:
        # base letter:
        letter = p.name.replace('+', '').replace('-', '')
        base = LETTER_TO_SEMITONES[letter]
        # accidentals: '+' is one sharp, '-' one flat
        alteration = p.name.count('+') - p.name.count('-')
        return p.octave * 12 + base + alteration

    return semitone_index(second_pitch) - semitone_index(first_pitch)
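The semitone arithmetic above can be reproduced without the `AgnosticPitch` class. This sketch assumes the conventional 12-tone base offsets for `LETTER_TO_SEMITONES` (kernpy defines its own constant, not shown here) and represents a pitch as a `(name, octave)` tuple:

```python
# Assumed base-semitone map (standard 12-tone offsets).
LETTER_TO_SEMITONES = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def semitone_index(name, octave):
    """Absolute semitone index; '+' is one sharp, '-' is one flat."""
    letter = name.replace('+', '').replace('-', '')
    alteration = name.count('+') - name.count('-')
    return octave * 12 + LETTER_TO_SEMITONES[letter] + alteration

def distance(first, second):
    """Signed distance in semitones from `first` to `second`."""
    return semitone_index(*second) - semitone_index(*first)

print(distance(('C', 4), ('E', 4)))   # 4
print(distance(('C', 4), ('B', 3)))   # -1
print(distance(('C', 4), ('C+', 4)))  # 1
```

The results match the doctests above: a major third is 4 semitones, and stepping down from C4 to B3 is -1.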

create(content, strict=False)

Create a Document object from a string encoded in Humdrum **kern format.

Args:
    content: String encoded in Humdrum **kern format
    strict: If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

Returns (Document, list): Document object and list of error messages. Empty list if no errors.

Examples:
    >>> import kernpy as kp
    >>> document, errors = kp.create('**kern\n4e\n4f\n4g\n*-\n')
    >>> if len(errors) > 0:
    >>>     print(errors)
    ['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']

Source code in kernpy/core/generic.py
@deprecated("Use 'loads' instead.")
def create(
        content: str,
        strict=False
) -> (Document, []):
    """
    Create a Document object from a string encoded in Humdrum **kern format.

    Args:
        content: String encoded in Humdrum **kern format
        strict: If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

    Returns (Document, list): Document object and list of error messages. Empty list if no errors.

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.create('**kern\n4e\n4f\n4g\n*-\n')
        >>> if len(errors) > 0:
        >>>     print(errors)
        ['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
    """
    return Generic.create(
        content=content,
        strict=strict
    )

deprecated(reason)

Decorator to mark a function or class as deprecated.

Parameters:

Name Type Description Default
reason str

The reason why the function/class is deprecated.

required
Example

@deprecated("Use new_function instead.") def old_function(): pass

Source code in kernpy/util/helpers.py
def deprecated(reason: str):
    """
    Decorator to mark a function or class as deprecated.

    Args:
        reason (str): The reason why the function/class is deprecated.

    Example:
        @deprecated("Use new_function instead.")
        def old_function():
            pass
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"'{func.__name__}' is deprecated: {reason}",
                category=DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator
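Using the decorator is enough to see the warning; the snippet below is a self-contained copy of the decorator with a throwaway `old_function`, capturing the `DeprecationWarning` via the standard `warnings` machinery:

```python
import functools
import warnings

def deprecated(reason: str):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"'{func.__name__}' is deprecated: {reason}",
                category=DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("Use new_function instead.")
def old_function():
    return 42

# Capture the warning instead of printing it.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_function()

print(result)                       # 42
print(caught[0].category.__name__)  # DeprecationWarning
```

Note `stacklevel=2`, which makes the warning point at the caller of the deprecated function rather than at the wrapper itself.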

distance(first_encoding, second_encoding, *, first_format=NotationEncoding.HUMDRUM.value, second_format=NotationEncoding.HUMDRUM.value)

Calculate the distance in semitones between two pitches.

Parameters:

Name Type Description Default
first_encoding str

The first pitch to compare.

required
second_encoding str

The second pitch to compare.

required
first_format str

The encoding format of the first pitch. Default is HUMDRUM.

HUMDRUM.value
second_format str

The encoding format of the second pitch. Default is HUMDRUM.

HUMDRUM.value

Returns:

Name Type Description
int int

The distance in semitones between the two pitches.

Examples:

>>> distance('C4', 'E4')
4
>>> distance('C4', 'B3')
-1
Source code in kernpy/core/transposer.py
def distance(
    first_encoding: str,
    second_encoding: str,
    *,
    first_format: str = NotationEncoding.HUMDRUM.value,
    second_format: str = NotationEncoding.HUMDRUM.value,
) -> int:
    """
    Calculate the distance in semitones between two pitches.

    Args:
        first_encoding (str): The first pitch to compare.
        second_encoding (str): The second pitch to compare.
        first_format (str): The encoding format of the first pitch. Default is HUMDRUM.
        second_format (str): The encoding format of the second pitch. Default is HUMDRUM.

    Returns:
        int: The distance in semitones between the two pitches.

    Examples:
        >>> distance('C4', 'E4')
        4
        >>> distance('C4', 'B3')
        -1
    """
    first_importer = PitchImporterFactory.create(first_format)
    first_pitch: AgnosticPitch = first_importer.import_pitch(first_encoding)

    second_importer = PitchImporterFactory.create(second_format)
    second_pitch: AgnosticPitch = second_importer.import_pitch(second_encoding)

    return agnostic_distance(first_pitch, second_pitch)

ekern_to_krn(input_file, output_file)

Convert a single .ekrn file to a .krn file.

Parameters:

Name Type Description Default
input_file str

Filepath to the input **ekern

required
output_file str

Filepath to the output **kern

required

Returns: None

Example

Convert .ekrn to .krn

ekern_to_krn('path/to/file.ekrn', 'path/to/file.krn')

Convert a list of .ekrn files to .krn files

ekrn_files = your_module.get_files()

# Use the wrapper to avoid stopping the process if an error occurs
def ekern_to_krn_wrapper(ekern_file, kern_file):
    try:
        ekern_to_krn(ekern_file, kern_file)
    except Exception as e:
        print(f'Error: {e}')

# Convert all the files
for ekern_file in ekrn_files:
    output_file = ekern_file.replace('.ekrn', '.krn')
    ekern_to_krn_wrapper(ekern_file, output_file)
Source code in kernpy/core/exporter.py
def ekern_to_krn(
        input_file: str,
        output_file: str
) -> None:
    """
    Convert one .ekrn file to .krn file.

    Args:
        input_file (str): Filepath to the input **ekern
        output_file (str): Filepath to the output **kern
    Returns:
        None

    Example:
        # Convert .ekrn to .krn
        >>> ekern_to_krn('path/to/file.ekrn', 'path/to/file.krn')

        # Convert a list of .ekrn files to .krn files
        ```python
        ekrn_files = your_module.get_files()

        # Use the wrapper to avoid stopping the process if an error occurs
        def ekern_to_krn_wrapper(ekern_file, kern_file):
            try:
                ekern_to_krn(ekern_file, kern_file)
            except Exception as e:
                print(f'Error: {e}')

        # Convert all the files
        for ekern_file in ekrn_files:
            output_file = ekern_file.replace('.ekrn', '.krn')
            ekern_to_krn_wrapper(ekern_file, output_file)
        ```
    """
    with open(input_file, 'r') as file:
        content = file.read()

    kern_content = get_kern_from_ekern(content)

    with open(output_file, 'w') as file:
        file.write(kern_content)

export(document, options)

Export a Document object to a string.

Parameters:

Name Type Description Default
document Document

Document object to export

required
options ExportOptions

Export options

required

Returns: Exported string

Examples:

>>> import kernpy as kp
>>> document, errors = kp.read('path/to/file.krn')
>>> options = kp.ExportOptions()
>>> content = kp.export(document, options)
Source code in kernpy/core/generic.py
@deprecated("Use 'dumps' instead.")
def export(
        document: Document,
        options: ExportOptions
) -> str:
    """
    Export a Document object to a string.

    Args:
        document: Document object to export
        options: Export options

    Returns: Exported string

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.read('path/to/file.krn')
        >>> options = kp.ExportOptions()
        >>> content = kp.export(document, options)
    """
    return Generic.export(
        document=document,
        options=options
    )

get_kern_from_ekern(ekern_content)

Read the content of an **ekern file and return the **kern content.

Parameters:

Name Type Description Default
ekern_content str

The content of the **ekern file.

required

Returns: The content of the **kern file.

Example
# Read **ekern file
ekern_file = 'path/to/file.ekrn'
with open(ekern_file, 'r') as file:
    ekern_content = file.read()

# Get **kern content
kern_content = get_kern_from_ekern(ekern_content)
with open('path/to/file.krn', 'w') as file:
    file.write(kern_content)

Source code in kernpy/core/exporter.py
def get_kern_from_ekern(ekern_content: str) -> str:
    """
    Read the content of a **ekern file and return the **kern content.

    Args:
        ekern_content: The content of the **ekern file.
    Returns:
        The content of the **kern file.

    Example:
        ```python
        # Read **ekern file
        ekern_file = 'path/to/file.ekrn'
        with open(ekern_file, 'r') as file:
            ekern_content = file.read()

        # Get **kern content
        kern_content = get_kern_from_ekern(ekern_content)
        with open('path/to/file.krn', 'w') as file:
            file.write(kern_content)

        ```
    """
    content = ekern_content.replace("**ekern", "**kern")  # TODO: use a constant derived from the header definitions
    content = content.replace(TOKEN_SEPARATOR, "")
    content = content.replace(DECORATION_SEPARATOR, "")

    return content
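Because `get_kern_from_ekern` is pure string replacement, its effect can be sketched standalone. Note that `'·'` and `'@'` below are placeholder separators chosen for this illustration, not kernpy's actual `TOKEN_SEPARATOR` and `DECORATION_SEPARATOR` values:

```python
# Illustrative sketch of the replacements get_kern_from_ekern performs.
# NOTE: '·' and '@' are placeholders; the real separator characters come
# from kernpy's constants.
TOKEN_SEPARATOR = '·'
DECORATION_SEPARATOR = '@'

def strip_ekern_markup(content: str) -> str:
    content = content.replace('**ekern', '**kern')        # restore the spine header
    content = content.replace(TOKEN_SEPARATOR, '')        # drop token separators
    content = content.replace(DECORATION_SEPARATOR, '')   # drop decoration separators
    return content

# '**ekern\n8·c@·L' becomes '**kern\n8cL'
print(strip_ekern_markup('**ekern\n8·c@·L'))
```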

get_spine_types(document, spine_types=None)

Get the spines of a Document object.

Parameters:

Name Type Description Default
document Document

Document object to get spines from

required
spine_types Optional[Sequence[str]]

List of spine types to get. If None, all spines are returned.

None

Returns (List[str]): List of spines

Examples:

>>> import kernpy as kp
>>> document, _ = kp.read('path/to/file.krn')
>>> kp.get_spine_types(document)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.get_spine_types(document, None)
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.get_spine_types(document, ['**kern'])
['**kern', '**kern', '**kern', '**kern']
>>> kp.get_spine_types(document, ['**kern', '**root'])
['**kern', '**kern', '**kern', '**kern', '**root']
>>> kp.get_spine_types(document, ['**kern', '**root', '**harm'])
['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
>>> kp.get_spine_types(document, [])
[]
Source code in kernpy/core/generic.py
@deprecated("Use 'spine_types' instead.")
def get_spine_types(
        document: Document,
        spine_types: Optional[Sequence[str]] = None
) -> List[str]:
    """
    Get the spines of a Document object.

    Args:
        document (Document): Document object to get spines from
        spine_types (Optional[Sequence[str]]): List of spine types to get. If None, all spines are returned.

    Returns (List[str]): List of spines

    Examples:
        >>> import kernpy as kp
        >>> document, _ = kp.read('path/to/file.krn')
        >>> kp.get_spine_types(document)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.get_spine_types(document, None)
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.get_spine_types(document, ['**kern'])
        ['**kern', '**kern', '**kern', '**kern']
        >>> kp.get_spine_types(document, ['**kern', '**root'])
        ['**kern', '**kern', '**kern', '**kern', '**root']
        >>> kp.get_spine_types(document, ['**kern', '**root', '**harm'])
        ['**kern', '**kern', '**kern', '**kern', '**root', '**harm']
        >>> kp.get_spine_types(document, [])
        []
    """
    return Generic.get_spine_types(
        document=document,
        spine_types=spine_types
    )

kern_to_ekern(input_file, output_file)

Convert one .krn file to a .ekrn file.

Parameters:

Name Type Description Default
input_file str

Filepath to the input **kern

required
output_file str

Filepath to the output **ekern

required

Returns:

Type Description
None

None

Example

Convert .krn to .ekrn

kern_to_ekern('path/to/file.krn', 'path/to/file.ekrn')

Convert a list of .krn files to .ekrn files

krn_files = your_module.get_files()

# Use the wrapper to avoid stopping the process if an error occurs
def kern_to_ekern_wrapper(krn_file, ekern_file):
    try:
        kern_to_ekern(krn_file, ekern_file)
    except Exception as e:
        print(f'Error:{e}')

# Convert all the files
for krn_file in krn_files:
    output_file = krn_file.replace('.krn', '.ekrn')
    kern_to_ekern_wrapper(krn_file, output_file)
Source code in kernpy/core/exporter.py
def kern_to_ekern(
        input_file: str,
        output_file: str
) -> None:
    """
    Convert one .krn file to a .ekrn file.

    Args:
        input_file (str): Filepath to the input **kern
        output_file (str): Filepath to the output **ekern

    Returns:
        None

    Example:
        # Convert .krn to .ekrn
        >>> kern_to_ekern('path/to/file.krn', 'path/to/file.ekrn')

        # Convert a list of .krn files to .ekrn files
        ```python
        krn_files = your_module.get_files()

        # Use the wrapper to avoid stopping the process if an error occurs
        def kern_to_ekern_wrapper(krn_file, ekern_file):
            try:
                kern_to_ekern(krn_file, ekern_file)
            except Exception as e:
                print(f'Error:{e}')

        # Convert all the files
        for krn_file in krn_files:
            output_file = krn_file.replace('.krn', '.ekrn')
            kern_to_ekern_wrapper(krn_file, output_file)
        ```

    """
    importer = Importer()
    document = importer.import_file(input_file)

    if len(importer.errors):
        raise Exception(f'ERROR: {input_file} has errors {importer.get_error_messages()}')

    export_options = ExportOptions(spine_types=['**kern'], token_categories=BEKERN_CATEGORIES,
                                   kern_type=Encoding.eKern)
    exporter = Exporter()
    exported_ekern = exporter.export_string(document, export_options)

    with open(output_file, 'w') as file:
        file.write(exported_ekern)

read(path, strict=False)

Read a Humdrum **kern file.

Parameters:

Name Type Description Default
path Union[str, Path]

File path to read

required
strict Optional[bool]

If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

False

Returns (Document, List[str]): Document object and list of error messages. Empty list if no errors.

Examples:

>>> import kernpy as kp
>>> document, _ = kp.read('path/to/file.krn')
>>> document, errors = kp.read('path/to/file.krn')
>>> if len(errors) > 0:
>>>     print(errors)
['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
Source code in kernpy/core/generic.py
@deprecated("Use 'load' instead.")
def read(
        path: Union[str, Path],
        strict: Optional[bool] = False
) -> (Document, List[str]):
    """
    Read a Humdrum **kern file.

    Args:
        path (Union[str, Path]): File path to read
        strict (Optional[bool]): If True, raise an error if the **kern file has any errors. Otherwise, return a list of errors.

    Returns (Document, List[str]): Document object and list of error messages. Empty list if no errors.

    Examples:
        >>> import kernpy as kp
        >>> document, _ = kp.read('path/to/file.krn')

        >>> document, errors = kp.read('path/to/file.krn')
        >>> if len(errors) > 0:
        >>>     print(errors)
        ['Error: Invalid **kern spine: 1', 'Error: Invalid **kern spine: 2']
    """
    return Generic.read(
        path=Path(path),
        strict=strict
    )

store(document, path, options)

Store a Document object to a file.

Parameters:

Name Type Description Default
document Document

Document object to store

required
path Union[str, Path]

File path to store

required
options ExportOptions

Export options

required

Returns: None

Examples:

>>> import kernpy as kp
>>> document, errors = kp.read('path/to/file.krn')
>>> options = kp.ExportOptions()
>>> kp.store(document, 'path/to/store.krn', options)
Source code in kernpy/core/generic.py
@deprecated("Use 'dump' instead.")
def store(
        document: Document,
        path: Union[str, Path],
        options: ExportOptions
) -> None:
    """
    Store a Document object to a file.

    Args:
        document (Document): Document object to store
        path (Union[str, Path]): File path to store
        options (ExportOptions): Export options

    Returns: None

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.read('path/to/file.krn')
        >>> options = kp.ExportOptions()
        >>> kp.store(document, 'path/to/store.krn', options)

    """
    Generic.store(
        document=document,
        path=Path(path),
        options=options
    )

store_graph(document, path)

Create a graph representation of a Document object using Graphviz. Save the graph to a file.

Parameters:

Name Type Description Default
document Document

Document object to create graph from

required
path str

File path to save the graph

required

Returns (None): None

Examples:

>>> import kernpy as kp
>>> document, errors = kp.read('path/to/file.krn')
>>> kp.store_graph(document, 'path/to/graph.dot')
Source code in kernpy/core/generic.py
@deprecated("Use 'graph' instead.")
def store_graph(
        document: Document,
        path: Union[str, Path]
) -> None:
    """
    Create a graph representation of a Document object using Graphviz. Save the graph to a file.

    Args:
        document (Document): Document object to create graph from
        path (str): File path to save the graph

    Returns (None): None

    Examples:
        >>> import kernpy as kp
        >>> document, errors = kp.read('path/to/file.krn')
        >>> kp.store_graph(document, 'path/to/graph.dot')
    """
    return Generic.store_graph(
        document=document,
        path=Path(path)
    )

transpose(input_encoding, interval, input_format=NotationEncoding.HUMDRUM.value, output_format=NotationEncoding.HUMDRUM.value, direction=Direction.UP.value)

Transpose a pitch by a given interval.

The pitch must be in American notation.

Parameters:

Name Type Description Default
input_encoding str

The pitch to transpose.

required
interval int

The interval to transpose the pitch.

required
input_format str

The encoding format of the pitch. Default is HUMDRUM.

HUMDRUM.value
output_format str

The encoding format of the transposed pitch. Default is HUMDRUM.

HUMDRUM.value
direction str

The direction of the transposition, 'UP' or 'DOWN'. Default is 'UP'.

UP.value

Returns:

Name Type Description
str str

The transposed pitch.

Examples:

>>> transpose('ccc', IntervalsByName['P4'], input_format='kern', output_format='kern')
'fff'
>>> transpose('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
'fff'
>>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
'gg'
>>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction=Direction.DOWN.value)
'gg'
>>> transpose('ccc#', IntervalsByName['P4'])
'fff#'
>>> transpose('G4', IntervalsByName['m3'], input_format='american')
'Bb4'
>>> transpose('G4', IntervalsByName['m3'], input_format=NotationEncoding.AMERICAN.value)
'Bb4'
>>> transpose('C3', IntervalsByName['P4'], input_format='american', direction='down')
'G2'
Source code in kernpy/core/transposer.py
def transpose(
        input_encoding: str,
        interval: int,
        input_format: str = NotationEncoding.HUMDRUM.value,
        output_format: str = NotationEncoding.HUMDRUM.value,
        direction: str = Direction.UP.value
) -> str:
    """
    Transpose a pitch by a given interval.

    The pitch must be in American notation.

    Args:
        input_encoding (str): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        input_format (str): The encoding format of the pitch. Default is HUMDRUM.
        output_format (str): The encoding format of the transposed pitch. Default is HUMDRUM.
        direction (str): The direction of the transposition, 'UP' or 'DOWN'. Default is 'UP'.

    Returns:
        str: The transposed pitch.

    Examples:
        >>> transpose('ccc', IntervalsByName['P4'], input_format='kern', output_format='kern')
        'fff'
        >>> transpose('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
        'fff'
        >>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
        'gg'
        >>> transpose('ccc', IntervalsByName['P4'], input_format='kern', direction=Direction.DOWN.value)
        'gg'
        >>> transpose('ccc#', IntervalsByName['P4'])
        'fff#'
        >>> transpose('G4', IntervalsByName['m3'], input_format='american')
        'Bb4'
        >>> transpose('G4', IntervalsByName['m3'], input_format=NotationEncoding.AMERICAN.value)
        'Bb4'
        >>> transpose('C3', IntervalsByName['P4'], input_format='american', direction='down')
        'G2'


    """
    importer = PitchImporterFactory.create(input_format)
    pitch: AgnosticPitch = importer.import_pitch(input_encoding)

    transposed_pitch = transpose_agnostics(pitch, interval, direction=direction)

    exporter = PitchExporterFactory.create(output_format)
    content = exporter.export_pitch(transposed_pitch)

    return content

transpose_agnostic_to_encoding(agnostic_pitch, interval, output_format=NotationEncoding.HUMDRUM.value, direction=Direction.UP.value)

Transpose an AgnosticPitch by a given interval.

Parameters:

Name Type Description Default
agnostic_pitch AgnosticPitch

The pitch to transpose.

required
interval int

The interval to transpose the pitch.

required
output_format Optional[str]

The encoding format of the transposed pitch. Default is HUMDRUM.

HUMDRUM.value
direction Optional[str]

The direction of the transposition, 'UP' or 'DOWN'. Default is 'UP'.

UP.value

Returns (str): str: The transposed pitch.

Examples:

>>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'])
'F4'
>>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
'G3'
>>> transpose_agnostic_to_encoding(AgnosticPitch('C#', 4), IntervalsByName['P4'])
'F#4'
>>> transpose_agnostic_to_encoding(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
'Bb4'
Source code in kernpy/core/transposer.py
def transpose_agnostic_to_encoding(
        agnostic_pitch: AgnosticPitch,
        interval: int,
        output_format: str = NotationEncoding.HUMDRUM.value,
        direction: str = Direction.UP.value
) -> str:
    """
    Transpose an AgnosticPitch by a given interval.

    Args:
        agnostic_pitch (AgnosticPitch): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        output_format (Optional[str]): The encoding format of the transposed pitch. Default is HUMDRUM.
        direction (Optional[str]): The direction of the transposition, 'UP' or 'DOWN'. Default is 'UP'.

    Returns (str):
        str: The transposed pitch.

    Examples:
        >>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'])
        'F4'
        >>> transpose_agnostic_to_encoding(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
        'G3'
        >>> transpose_agnostic_to_encoding(AgnosticPitch('C#', 4), IntervalsByName['P4'])
        'F#4'
        >>> transpose_agnostic_to_encoding(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
        'Bb4'
    """
    exporter = PitchExporterFactory.create(output_format)
    transposed_pitch = transpose_agnostics(agnostic_pitch, interval, direction=direction)
    content = exporter.export_pitch(transposed_pitch)

    return content

transpose_agnostics(input_pitch, interval, direction=Direction.UP.value)

Transpose an AgnosticPitch by a given interval.

Parameters:

Name Type Description Default
input_pitch AgnosticPitch

The pitch to transpose.

required
interval int

The interval to transpose the pitch.

required
direction str

The direction of the transposition. 'UP' or 'DOWN'. Default is 'UP'.

UP.value
Returns:

AgnosticPitch: The transposed pitch.

Examples:

>>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'])
AgnosticPitch('F', 4)
>>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
AgnosticPitch('G', 3)
>>> transpose_agnostics(AgnosticPitch('C#', 4), IntervalsByName['P4'])
AgnosticPitch('F#', 4)
>>> transpose_agnostics(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
AgnosticPitch('Bb', 4)
Source code in kernpy/core/transposer.py
def transpose_agnostics(
        input_pitch: AgnosticPitch,
        interval: int,
        direction: str = Direction.UP.value
) -> AgnosticPitch:
    """
    Transpose an AgnosticPitch by a given interval.

    Args:
        input_pitch (AgnosticPitch): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        direction (str): The direction of the transposition. 'UP' or 'DOWN'. Default is 'UP'.

    Returns:
        AgnosticPitch: The transposed pitch.

    Examples:
        >>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'])
        AgnosticPitch('F', 4)
        >>> transpose_agnostics(AgnosticPitch('C', 4), IntervalsByName['P4'], direction='down')
        AgnosticPitch('G', 3)
        >>> transpose_agnostics(AgnosticPitch('C#', 4), IntervalsByName['P4'])
        AgnosticPitch('F#', 4)
        >>> transpose_agnostics(AgnosticPitch('G', 4), IntervalsByName['m3'], direction='down')
        AgnosticPitch('Bb', 4)

    """
    return AgnosticPitch.to_transposed(input_pitch, interval, direction)
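The base-40 arithmetic behind these helpers can be sketched with the `Intervals` table shown at the top of this reference. This toy version is illustrative only (kernpy's real implementation is `AgnosticPitch.to_transposed`); in base-40, each octave spans 40 steps, so transposition reduces to integer addition:

```python
# Toy sketch of base-40 interval arithmetic (illustrative only).
# Subset of the base-40 Intervals table from the top of this reference.
INTERVALS = {0: 'P1', 5: 'm2', 6: 'M2', 11: 'm3', 12: 'M3',
             17: 'P4', 23: 'P5', 28: 'm6', 29: 'M6',
             34: 'm7', 35: 'M7', 40: 'octave'}

# Inverted lookup, analogous to kernpy's IntervalsByName.
INTERVALS_BY_NAME = {name: size for size, name in INTERVALS.items()}

def transpose_base40(pitch: int, interval_name: str, direction: str = 'UP') -> int:
    """Shift a raw base-40 pitch number by a named interval."""
    delta = INTERVALS_BY_NAME[interval_name]
    return pitch + delta if direction == 'UP' else pitch - delta

# A perfect fifth up (23 steps) followed by a perfect fourth up (17 steps)
# lands exactly one octave (40 steps) above the starting pitch.
print(transpose_base40(transpose_base40(0, 'P5'), 'P4'))  # 40
```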

transpose_encoding_to_agnostic(input_encoding, interval, input_format=NotationEncoding.HUMDRUM.value, direction=Direction.UP.value)

Transpose a pitch by a given interval.

The pitch must be in American notation.

Parameters:

Name Type Description Default
input_encoding str

The pitch to transpose.

required
interval int

The interval to transpose the pitch.

required
input_format str

The encoding format of the pitch. Default is HUMDRUM.

HUMDRUM.value
direction str

The direction of the transposition, 'UP' or 'DOWN'. Default is 'UP'.

UP.value

Returns:

Name Type Description
AgnosticPitch AgnosticPitch

The transposed pitch.

Examples:

>>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern')
AgnosticPitch('fff', 4)
>>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
AgnosticPitch('fff', 4)
>>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
AgnosticPitch('gg', 3)
>>> transpose_encoding_to_agnostic('ccc#', IntervalsByName['P4'])
AgnosticPitch('fff#', 4)
>>> transpose_encoding_to_agnostic('G4', IntervalsByName['m3'], input_format='american')
AgnosticPitch('Bb4', 4)
>>> transpose_encoding_to_agnostic('C3', IntervalsByName['P4'], input_format='american', direction='down')
AgnosticPitch('G2', 2)
Source code in kernpy/core/transposer.py
def transpose_encoding_to_agnostic(
        input_encoding: str,
        interval: int,
        input_format: str = NotationEncoding.HUMDRUM.value,
        direction: str = Direction.UP.value
) -> AgnosticPitch:
    """
    Transpose a pitch by a given interval.

    The pitch must be in American notation.

    Args:
        input_encoding (str): The pitch to transpose.
        interval (int): The interval to transpose the pitch.
        input_format (str): The encoding format of the pitch. Default is HUMDRUM.
        direction (str): The direction of the transposition, 'UP' or 'DOWN'. Default is 'UP'.

    Returns:
        AgnosticPitch: The transposed pitch.

    Examples:
        >>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern')
        AgnosticPitch('fff', 4)
        >>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format=NotationEncoding.HUMDRUM.value)
        AgnosticPitch('fff', 4)
        >>> transpose_encoding_to_agnostic('ccc', IntervalsByName['P4'], input_format='kern', direction='down')
        AgnosticPitch('gg', 3)
        >>> transpose_encoding_to_agnostic('ccc#', IntervalsByName['P4'])
        AgnosticPitch('fff#', 4)
        >>> transpose_encoding_to_agnostic('G4', IntervalsByName['m3'], input_format='american')
        AgnosticPitch('Bb4', 4)
        >>> transpose_encoding_to_agnostic('C3', IntervalsByName['P4'], input_format='american', direction='down')
        AgnosticPitch('G2', 2)

    """
    importer = PitchImporterFactory.create(input_format)
    pitch: AgnosticPitch = importer.import_pitch(input_encoding)

    return transpose_agnostics(pitch, interval, direction=direction)

kernpy.util

=====

This module contains utility functions for the kernpy package.

StoreCache

A simple cache that stores the result of a callback function

Source code in kernpy/util/store_cache.py
class StoreCache:
    """
    A simple cache that stores the result of a callback function
    """
    def __init__(self):
        """
        Constructor
        """
        self.memory = {}

    def request(self, callback, request):
        """
        Request a value from the cache. If the value is not in the cache, it will be calculated by the callback function
        Args:
            callback (function): The callback function that will be called to calculate the value
            request (any): The request that will be passed to the callback function

        Returns (any): The value that was requested

        Examples:
            >>> def add_five(x):
            ...     return x + 5
            >>> store_cache = StoreCache()
            >>> store_cache.request(add_five, 5)  # Call the callback function
            10
            >>> store_cache.request(add_five, 5)  # Return the value from the cache, without calling the callback function
            10
        """
        if request in self.memory:
            return self.memory[request]
        else:
            result = callback(request)
            self.memory[request] = result
            return result

__init__()

Constructor

Source code in kernpy/util/store_cache.py
def __init__(self):
    """
    Constructor
    """
    self.memory = {}

request(callback, request)

Request a value from the cache. If the value is not in the cache, it will be calculated by the callback function.

Parameters:

Name Type Description Default
callback function

The callback function that will be called to calculate the value

required
request any

The request that will be passed to the callback function

required

Returns (any): The value that was requested

Examples:

>>> def add_five(x):
...     return x + 5
>>> store_cache = StoreCache()
>>> store_cache.request(add_five, 5)  # Call the callback function
10
>>> store_cache.request(add_five, 5)  # Return the value from the cache, without calling the callback function
10
Source code in kernpy/util/store_cache.py
def request(self, callback, request):
    """
    Request a value from the cache. If the value is not in the cache, it will be calculated by the callback function
    Args:
        callback (function): The callback function that will be called to calculate the value
        request (any): The request that will be passed to the callback function

    Returns (any): The value that was requested

    Examples:
        >>> def add_five(x):
        ...     return x + 5
        >>> store_cache = StoreCache()
        >>> store_cache.request(add_five, 5)  # Call the callback function
        10
        >>> store_cache.request(add_five, 5)  # Return the value from the cache, without calling the callback function
        10
    """
    if request in self.memory:
        return self.memory[request]
    else:
        result = callback(request)
        self.memory[request] = result
        return result
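The caching behaviour can be exercised end to end. This sketch inlines the class body from above so the example runs standalone, and uses a counter to show that the callback executes only once per distinct request:

```python
class StoreCache:
    """Memoize the result of a callback per request value."""
    def __init__(self):
        self.memory = {}

    def request(self, callback, request):
        # Compute and store on a cache miss; return the stored value on a hit.
        if request not in self.memory:
            self.memory[request] = callback(request)
        return self.memory[request]

calls = []
def add_five(x):
    calls.append(x)          # record every real invocation
    return x + 5

cache = StoreCache()
assert cache.request(add_five, 5) == 10   # computed by the callback
assert cache.request(add_five, 5) == 10   # served from the cache
assert calls == [5]                       # the callback ran only once
```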