Datasets:

| id (string, length 16) | source (2 classes) | repo (92 classes) | specification (string, length 78–30.9k) | code (string, length 0–5k) | url (string, length 37–200) |
|---|---|---|---|---|---|
fb9b03a14c7fe9a0 | docstring | paramiko/paramiko | def _wait_for_send_window(self, size):
(You are already holding the lock.)
Wait for the send window to open up, and allocate up to ``size`` bytes
for transmission. If no space opens up before the timeout, a timeout
exception is raised. Returns the number of bytes available to send
(may be less than requested).
Rais... | def _wait_for_send_window(self, size):
"""
(You are already holding the lock.)
Wait for the send window to open up, and allocate up to ``size`` bytes
for transmission. If no space opens up before the timeout, a timeout
exception is raised. Returns the number of bytes available ... | https://github.com/paramiko/paramiko/blob/HEAD/paramiko/channel.py |
d81e5b0e027dff38 | docstring | joblib/joblib | def save(self, obj):
Subclass the Pickler `save` method.
This is a total abuse of the Pickler class in order to use the numpy
persistence function `save` instead of the default pickle
implementation. The numpy array is replaced by a custom wrapper in the
pickle persistence stack and the serialized array is written ri... | def save(self, obj):
"""Subclass the Pickler `save` method.
This is a total abuse of the Pickler class in order to use the numpy
persistence function `save` instead of the default pickle
implementation. The numpy array is replaced by a custom wrapper in the
pickle persistence st... | https://github.com/joblib/joblib/blob/HEAD/joblib/numpy_pickle.py |
9494098ea32dca12 | docstring | kubernetes-client/python | def __init__(self, ca_bundle=None, service=None, url=None, local_vars_configuration=None): # noqa: E501
"""AdmissionregistrationV1WebhookClientConfig - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
AdmissionregistrationV1WebhookClientConfig - a model defined in OpenA... | def __init__(self, ca_bundle=None, service=None, url=None, local_vars_configuration=None): # noqa: E501
"""AdmissionregistrationV1WebhookClientConfig - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.lo... | https://github.com/kubernetes-client/python/blob/HEAD/kubernetes/client/models/admissionregistration_v1_webhook_client_config.py |
f9afcf5bbf6bfc2c | docstring | dropbox/dropbox-sdk-python | def files_get_preview(self,
path,
rev=None):
Get a preview for a file. Currently, PDF previews are generated for
files with the following extensions: .ai, .doc, .docm, .docx, .eps,
.gdoc, .gslides, .odp, .odt, .pps, .ppsm, .ppsx, .ppt, .pptm, .pptx,
.rtf. HTML previe... | def files_get_preview(self,
path,
rev=None):
"""
Get a preview for a file. Currently, PDF previews are generated for
files with the following extensions: .ai, .doc, .docm, .docx, .eps,
.gdoc, .gslides, .odp, .odt, .pps, .ppsm, .ppsx, .p... | https://github.com/dropbox/dropbox-sdk-python/blob/HEAD/dropbox/base.py |
8650a40ef0dc50ff | docstring | astropy/astropy | def two_sum(a, b):
Add ``a`` and ``b`` exactly, returning the result as two float64s.
The first is the approximate sum (with some floating point error)
and the second is the error of the float64 sum.
Using the procedure of Shewchuk, 1997,
Discrete & Computational Geometry 18(3):305-363
http://www.cs.berkeley.edu/~jrs... | def two_sum(a, b):
"""
Add ``a`` and ``b`` exactly, returning the result as two float64s.
The first is the approximate sum (with some floating point error)
and the second is the error of the float64 sum.
Using the procedure of Shewchuk, 1997,
Discrete & Computational Geometry 18(3):305-363
... | https://github.com/astropy/astropy/blob/HEAD/astropy/time/utils.py |
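The astropy row above documents Shewchuk's exact two-sum procedure. A minimal self-contained sketch of that algorithm (an illustration, not astropy's actual implementation) looks like this:

```python
def two_sum(a, b):
    """Return (sum, error) such that a + b == sum + error exactly.

    Follows Shewchuk's TwoSum: a handful of floating-point operations
    recover the roundoff lost when a + b is rounded to float64.
    """
    x = a + b
    b_virtual = x - a          # the part of b that made it into x
    a_virtual = x - b_virtual  # the part of a that made it into x
    err = (a - a_virtual) + (b - b_virtual)
    return x, err
```

For example, `two_sum(1.0, 1e-16)` returns `(1.0, 1e-16)`: the tiny addend is lost in the rounded sum but recovered exactly in the error term.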
57b9717de8dbe4cd | docstring | redis/redis-py | def __getitem__(self, name: KeyT):
Return the value at key ``name``, raises a KeyError if the key
doesn't exist.
Raises:
KeyError(name) | def __getitem__(self, name: KeyT):
"""
Return the value at key ``name``, raises a KeyError if the key
doesn't exist.
"""
value = self.get(name)
if value is not None:
return value
raise KeyError(name) | https://github.com/redis/redis-py/blob/HEAD/redis/commands/core.py |
72cbb2a7ec854c31 | docstring | twisted/twisted | def _fromPrivateOpenSSH_PEM(cls, data, passphrase):
Return a private key object corresponding to this OpenSSH private key
string, in the old PEM-based format.
The format of a PEM-based OpenSSH private key string is::
-----BEGIN <key type> PRIVATE KEY-----
[Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,<in... | def _fromPrivateOpenSSH_PEM(cls, data, passphrase):
"""
Return a private key object corresponding to this OpenSSH private key
string, in the old PEM-based format.
The format of a PEM-based OpenSSH private key string is::
-----BEGIN <key type> PRIVATE KEY-----
[Pr... | https://github.com/twisted/twisted/blob/HEAD/src/twisted/conch/ssh/keys.py |
7327149a4d0ded9e | docstring | scikit-learn/scikit-learn | def check_classifier_not_supporting_multiclass(name, estimator_orig):
Check that if the classifier has tags.classifier_tags.multi_class=False,
then it should raise a ValueError when calling fit with a multiclass dataset.
This test is not yielded if the tag is not False. | def check_classifier_not_supporting_multiclass(name, estimator_orig):
"""Check that if the classifier has tags.classifier_tags.multi_class=False,
then it should raise a ValueError when calling fit with a multiclass dataset.
This test is not yielded if the tag is not False.
"""
estimator = clone(est... | https://github.com/scikit-learn/scikit-learn/blob/HEAD/sklearn/utils/estimator_checks.py |
c4f457d53562d180 | docstring | googleapis/python-bigquery | def _field_to_json(field, row_value):
Convert a field into JSON-serializable values.
Args:
field (google.cloud.bigquery.schema.SchemaField):
The SchemaField to use for type conversion and field name.
row_value (Union[Sequence[List], Any]):
Row data to be inserted. If the SchemaField's mode is... | def _field_to_json(field, row_value):
"""Convert a field into JSON-serializable values.
Args:
field (google.cloud.bigquery.schema.SchemaField):
The SchemaField to use for type conversion and field name.
row_value (Union[Sequence[List], Any]):
Row data to be inserted. If... | https://github.com/googleapis/python-bigquery/blob/HEAD/google/cloud/bigquery/_helpers.py |
962e8c91a27476d1 | docstring | aws/aws-cli | def _load_and_validate_digest(
self, public_keys, bucket, key, is_backfill=False
):
Loads and validates a digest from S3.
:param public_keys: Public key dictionary of fingerprint to dict.
:param bucket: S3 bucket name
:param key: S3 key for the digest file
:param is_backfill: Flag indicating if this is a ... | def _load_and_validate_digest(
self, public_keys, bucket, key, is_backfill=False
):
"""Loads and validates a digest from S3.
:param public_keys: Public key dictionary of fingerprint to dict.
:param bucket: S3 bucket name
:param key: S3 key for the digest file
:param ... | https://github.com/aws/aws-cli/blob/HEAD/awscli/customizations/cloudtrail/validation.py |
725f3f939e159d8f | docstring | redis/redis-py | async def config_get(self, option: str) -> str:
Get runtime configuration option value.
### Parameters
- **option**: the name of the configuration option.
For more information see `FT.CONFIG GET <https://redis.io/commands/ft.config-get>`_. | async def config_get(self, option: str) -> str:
"""Get runtime configuration option value.
### Parameters
- **option**: the name of the configuration option.
For more information see `FT.CONFIG GET <https://redis.io/commands/ft.config-get>`_.
""" # noqa
cmd = [CONFIG_... | https://github.com/redis/redis-py/blob/HEAD/redis/commands/search/commands.py |
0e77a74d75b47002 | github_issue | tiangolo/fastapi | fix: sanitize subprocess call in translate.py
## Summary
Fix critical severity security issue in `scripts/translate.py`.
## Vulnerability
| Field | Value |
|-------|-------|
| **ID** | V-001 |
| **Severity** | CRITICAL |
| **Scanner** | multi_agent_ai |
| **Rule** | `V-001` |
| **File** | `scripts/translate.py:420` |... | https://github.com/fastapi/fastapi/pull/15306 | |
32abb0da3c5b1251 | docstring | aio-libs/aiohttp | def scheme(self) -> str:
A string representing the scheme of the request.
Hostname is resolved in this order:
- overridden value by .clone(scheme=new_scheme) call.
- type of connection to peer: HTTPS if socket is SSL, HTTP otherwise.
'http' or 'https'. | def scheme(self) -> str:
"""A string representing the scheme of the request.
Hostname is resolved in this order:
- overridden value by .clone(scheme=new_scheme) call.
- type of connection to peer: HTTPS if socket is SSL, HTTP otherwise.
'http' or 'https'.
"""
i... | https://github.com/aio-libs/aiohttp/blob/HEAD/aiohttp/web_request.py |
91d6950d9305245a | docstring | google/flax | def variable_name_from_type(
typ: tp.Type[Variable[tp.Any]], /, *, allow_register: bool = False
) -> str:
Given an NNX Variable type, get its Linen-style collection name.
Should output the exact inversed result of `variable_type_from_name()`.
Raises:
ValueError(f'Type {typ} is not registered in the registry. T... | def variable_name_from_type(
typ: tp.Type[Variable[tp.Any]], /, *, allow_register: bool = False
) -> str:
"""Given an NNX Variable type, get its Linen-style collection name.
Should output the exact inversed result of `variable_type_from_name()`."""
for name, t in VariableTypeCache.items():
if typ == t:
... | https://github.com/google/flax/blob/HEAD/flax/nnx/variablelib.py |
a42b52ea1a3f9182 | docstring | huggingface/peft | def forward(self, indices):
Computes the prompt embeddings and applies delta adjustments.
Args:
indices (torch.Tensor):
Indices of the tokens to be embedded.
Returns:
torch.Tensor:
Sum of prompt embeddings and delta embeddings. | def forward(self, indices):
"""
Computes the prompt embeddings and applies delta adjustments.
Args:
indices (torch.Tensor):
Indices of the tokens to be embedded.
Returns:
torch.Tensor:
Sum of prompt embeddings and delta embeddings... | https://github.com/huggingface/peft/blob/HEAD/src/peft/tuners/cpt/model.py |
7da313f7dd07541c | docstring | chroma-core/chroma | def _encode_image(self, image: Image) -> Embedding:
Encode an image using the Open CLIP model.
Args:
image: The image to encode.
Returns:
The embedding for the image. | def _encode_image(self, image: Image) -> Embedding:
"""
Encode an image using the Open CLIP model.
Args:
image: The image to encode.
Returns:
The embedding for the image.
"""
pil_image = self._PILImage.fromarray(image)
with self._torch.no... | https://github.com/chroma-core/chroma/blob/HEAD/chromadb/utils/embedding_functions/open_clip_embedding_function.py |
fedcdc9af4e6305d | docstring | astropy/astropy | def _custom_model_wrapper(func, fit_deriv=None):
Internal implementation `~astropy.modeling.custom_model`.
When `~astropy.modeling.custom_model` is called as a function its
arguments are passed to this function, and the result of this
function is returned.
When `~astropy.modeling.custom_model` is used as a decorator... | def _custom_model_wrapper(func, fit_deriv=None):
"""
Internal implementation `~astropy.modeling.custom_model`.
When `~astropy.modeling.custom_model` is called as a function its
arguments are passed to this function, and the result of this
function is returned.
When `~astropy.modeling.custom_mo... | https://github.com/astropy/astropy/blob/HEAD/astropy/modeling/core.py |
912ba4fcd590a15b | docstring | googleapis/python-bigquery | def from_api_repr(cls, resource: dict) -> "RoutineArgument":
Factory: construct a routine argument given its API representation.
Args:
resource (Dict[str, object]): Resource, as returned from the API.
Returns:
google.cloud.bigquery.routine.RoutineArgument:
Python object, as parsed from ``resource``. | def from_api_repr(cls, resource: dict) -> "RoutineArgument":
"""Factory: construct a routine argument given its API representation.
Args:
resource (Dict[str, object]): Resource, as returned from the API.
Returns:
google.cloud.bigquery.routine.RoutineArgument:
... | https://github.com/googleapis/python-bigquery/blob/HEAD/google/cloud/bigquery/routine/routine.py |
248129c1b6abf6b9 | docstring | pallets/click | def section(self, name: str) -> cabc.Iterator[None]:
Helpful context manager that writes a paragraph, a heading,
and the indents.
:param name: the section name that is written as heading. | def section(self, name: str) -> cabc.Iterator[None]:
"""Helpful context manager that writes a paragraph, a heading,
and the indents.
:param name: the section name that is written as heading.
"""
self.write_paragraph()
self.write_heading(name)
self.indent()
... | https://github.com/pallets/click/blob/HEAD/src/click/formatting.py |
a1a1d4a80bcc0604 | docstring | nltk/nltk | def _color_edge(self, edge, linecolor=None, textcolor=None):
Color in an edge with the given colors.
If no colors are specified, use intelligent defaults
(dependent on selection, etc.) | def _color_edge(self, edge, linecolor=None, textcolor=None):
"""
Color in an edge with the given colors.
If no colors are specified, use intelligent defaults
(dependent on selection, etc.)
"""
if edge not in self._edgetags:
return
c = self._chart_canva... | https://github.com/nltk/nltk/blob/HEAD/nltk/app/chartparser_app.py |
09bd16df1a4969bf | docstring | python-jsonschema/jsonschema | def validate(instance, schema, cls=None, *args, **kwargs): # noqa: D417
"""
Validate an instance under the given schema.
>>> validate([2, 3, 4], {"maxItems": 2})
Traceback (most recent call last):
Validate an instance under the given schema.
>>> validate([2, 3, 4], {"maxItems": 2})
T... | def validate(instance, schema, cls=None, *args, **kwargs): # noqa: D417
"""
Validate an instance under the given schema.
>>> validate([2, 3, 4], {"maxItems": 2})
Traceback (most recent call last):
...
ValidationError: [2, 3, 4] is too long
:func:`~jsonschema.validators... | https://github.com/python-jsonschema/jsonschema/blob/HEAD/jsonschema/validators.py |
30bab2f665bba369 | docstring | explosion/spaCy | def __init__(self, options: Dict[str, Any] = {}) -> None:
Initialise span renderer
options (dict): Visualiser-specific options (colors, spans)
Raises:
ValueError(Errors.E925.format(obj=type(user_color))) | def __init__(self, options: Dict[str, Any] = {}) -> None:
"""Initialise span renderer
options (dict): Visualiser-specific options (colors, spans)
"""
# Set up the colors and overall look
colors = dict(DEFAULT_LABEL_COLORS)
user_colors = registry.displacy_colors.get_all()... | https://github.com/explosion/spaCy/blob/HEAD/spacy/displacy/render.py |
dbbc5b4021415d66 | docstring | twilio/twilio-python | def update(self, friendly_name: str) -> IpAccessControlListInstance:
Update the IpAccessControlListInstance
:param friendly_name: A human readable descriptive text, up to 255 characters long.
:returns: The updated IpAccessControlListInstance | def update(self, friendly_name: str) -> IpAccessControlListInstance:
"""
Update the IpAccessControlListInstance
:param friendly_name: A human readable descriptive text, up to 255 characters long.
:returns: The updated IpAccessControlListInstance
"""
payload, _, _ = self... | https://github.com/twilio/twilio-python/blob/HEAD/twilio/rest/api/v2010/account/sip/ip_access_control_list/__init__.py |
918090adebc02e64 | github_issue | pallets/click | Fix launch() with locate=True for paths containing spaces on Windows
## Problem
On Windows, `click.launch(path, locate=True)` builds the command:
```
explorer /select,C:\My Documents\file.txt
```
When the path contains spaces, Windows Explorer misinterprets it and opens the wrong location or fails silently.
## Solu... | https://github.com/pallets/click/pull/3307 | |
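The fix described in this row amounts to quoting the path before handing it to Explorer. A hedged sketch of the command construction (the helper name is hypothetical, not click's actual API):

```python
def explorer_locate_command(path: str) -> str:
    """Build the Windows Explorer invocation that selects ``path``.

    Wrapping the path in double quotes keeps Explorer from splitting
    a path containing spaces into multiple arguments.
    """
    return f'explorer /select,"{path}"'
```

With quoting, `C:\My Documents\file.txt` reaches Explorer as a single argument instead of being split at the space.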
58c711bbd891343d | docstring | tortoise/tortoise-orm | def get(self, conn_alias: str) -> BaseDBAsyncClient:
Return the connection object for the given alias, creating it if needed.
If the connection's event loop has changed (e.g., in a test with a new event loop),
the connection is transparently replaced with a fresh one and a
:class:`TortoiseLoopSwitchWarning<tortoise.w... | def get(self, conn_alias: str) -> BaseDBAsyncClient:
"""
Return the connection object for the given alias, creating it if needed.
If the connection's event loop has changed (e.g., in a test with a new event loop),
the connection is transparently replaced with a fresh one and a
:... | https://github.com/tortoise/tortoise-orm/blob/HEAD/tortoise/connection.py |
346e0da50e86c572 | github_issue | coleifer/peewee | Generation of string representation of Query as Python code instead of SQL
Would it be possible to get the code that generated the query back as a string from the query instance?
Thanks for the great tool! | https://github.com/coleifer/peewee/issues/3037 | |
894aa45d80711398 | docstring | sendgrid/sendgrid-python | def parse_email(self, email_info):
Allows passing emails as "Example Name <[email protected]>"
:param email_info: Allows passing emails as
"Example Name <[email protected]>"
:type email_info: string | def parse_email(self, email_info):
"""Allows passing emails as "Example Name <[email protected]>"
:param email_info: Allows passing emails as
"Example Name <[email protected]>"
:type email_info: string
"""
name, email = rfc822.parseaddr(email_info)... | https://github.com/sendgrid/sendgrid-python/blob/HEAD/sendgrid/helpers/mail/email.py |
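The sendgrid snippet above relies on the Python 2 `rfc822` module; its Python 3 equivalent is `email.utils.parseaddr`. A hedged sketch of the same behavior:

```python
from email.utils import parseaddr

def parse_email(email_info):
    """Split 'Example Name <example@example.com>' into (name, address).

    Returns None for the name when the input is a bare address.
    """
    name, address = parseaddr(email_info)
    return (name or None), address
```

For a bare address such as `"example@example.com"`, `parseaddr` returns an empty name, which this sketch normalizes to `None`.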
4fe3369c7489765b | docstring | scipy/scipy | def nonzero(self):
Nonzero indices of the array/matrix.
Returns
-------
row : ndarray
Row indices of non-zero elements.
col : ndarray
Column indices of non-zero elements.
Examples
--------
>>> from scipy.sparse import csr_array
>>> A = csr_array([[1, 2, 0], [0, 0, 3], [4, 0, 5]])
>>> A.nonzero()
(array([0, 0... | def nonzero(self):
"""Nonzero indices of the array/matrix.
Returns
-------
row : ndarray
Row indices of non-zero elements.
col : ndarray
Column indices of non-zero elements.
Examples
--------
>>> from scipy.sparse import csr_array... | https://github.com/scipy/scipy/blob/HEAD/scipy/sparse/_base.py |
aaf80627cb2a0ddb | docstring | python-trio/trio | def from_thread_run_sync(
fn: Callable[[Unpack[Ts]], RetT],
*args: Unpack[Ts],
trio_token: TrioToken | None = None,
) -> RetT:
Run the given sync function in the parent Trio thread, blocking until it
is complete.
Returns:
Whatever ``fn(*args)`` returns.
Returns or raises whatever the given function ret... | def from_thread_run_sync(
fn: Callable[[Unpack[Ts]], RetT],
*args: Unpack[Ts],
trio_token: TrioToken | None = None,
) -> RetT:
"""Run the given sync function in the parent Trio thread, blocking until it
is complete.
Returns:
Whatever ``fn(*args)`` returns.
Returns or raises whatever ... | https://github.com/python-trio/trio/blob/HEAD/src/trio/_threads.py |
4901f501e98a1fee | docstring | psf/requests | def unicode_is_ascii(u_string):
Determine if unicode string only contains ASCII characters.
:param str u_string: unicode string to check. Must be unicode
and not Python 2 `str`.
:rtype: bool | def unicode_is_ascii(u_string):
"""Determine if unicode string only contains ASCII characters.
:param str u_string: unicode string to check. Must be unicode
and not Python 2 `str`.
:rtype: bool
"""
assert isinstance(u_string, str)
try:
u_string.encode("ascii")
return Tru... | https://github.com/psf/requests/blob/HEAD/src/requests/_internal_utils.py |
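The requests helper above is cut off mid-`return`; the function is short enough to sketch in full (a reconstruction from the visible lines, not the verified upstream source):

```python
def unicode_is_ascii(u_string):
    """Determine if a unicode string only contains ASCII characters.

    :param str u_string: unicode string to check.
    :rtype: bool
    """
    assert isinstance(u_string, str)
    try:
        u_string.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False
```

Encoding to ASCII either succeeds, proving every code point is below 128, or raises `UnicodeEncodeError`.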
0aaf1b76bf9c8188 | docstring | mongodb/mongo-python-driver | def rename(
self, file_id: Any, new_filename: str, session: Optional[ClientSession] = None
) -> None:
Renames the stored file with the specified file_id.
For example::
my_db = MongoClient().test
fs = GridFSBucket(my_db)
# Get _id of file to rename
file_id = fs.upload_from_stream("test_file", "dat... | def rename(
self, file_id: Any, new_filename: str, session: Optional[ClientSession] = None
) -> None:
"""Renames the stored file with the specified file_id.
For example::
my_db = MongoClient().test
fs = GridFSBucket(my_db)
# Get _id of file to rename
... | https://github.com/mongodb/mongo-python-driver/blob/HEAD/gridfs/synchronous/grid_file.py |
d561ea3142e2fecf | docstring | prompt-toolkit/python-prompt-toolkit | def _history_matches(self, i: int) -> bool:
True when the current entry matches the history search.
(when we don't have history search, it's also True.) | def _history_matches(self, i: int) -> bool:
"""
True when the current entry matches the history search.
(when we don't have history search, it's also True.)
"""
return self.history_search_text is None or self._working_lines[i].startswith(
self.history_search_text
... | https://github.com/prompt-toolkit/python-prompt-toolkit/blob/HEAD/src/prompt_toolkit/buffer.py |
bc24d7029428f24b | docstring | dask/dask | def partition_by_size(sizes, seq):
>>> partition_by_size([10, 20, 10], [1, 5, 9, 12, 29, 35])
[array([1, 5, 9]), array([ 2, 19]), array([5])] | def partition_by_size(sizes, seq):
"""
>>> partition_by_size([10, 20, 10], [1, 5, 9, 12, 29, 35])
[array([1, 5, 9]), array([ 2, 19]), array([5])]
"""
if not is_arraylike(seq):
seq = np.asanyarray(seq)
left = np.empty(len(sizes) + 1, dtype=int)
left[0] = 0
right = np.cumsum(size... | https://github.com/dask/dask/blob/HEAD/dask/array/slicing.py |
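The dask row's doctest can be reproduced with a short NumPy sketch (an approximation of the truncated implementation, assuming ``seq`` is sorted and its values fall within the total of ``sizes``):

```python
import numpy as np

def partition_by_size(sizes, seq):
    """Split sorted ``seq`` into bins bounded by the cumulative ``sizes``,
    shifting each bin so values are relative to its left edge."""
    seq = np.asanyarray(seq)
    right = np.cumsum(sizes)
    left = np.concatenate([[0], right[:-1]])
    # find where each bin boundary would land in seq, then split there
    locations = np.searchsorted(seq, right)
    chunks = np.split(seq, locations[:-1])
    return [chunk - offset for chunk, offset in zip(chunks, left)]
```

Running the doctest input, `partition_by_size([10, 20, 10], [1, 5, 9, 12, 29, 35])` yields `[array([1, 5, 9]), array([ 2, 19]), array([5])]`.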
49a051d1c580105f | docstring | pandas-dev/pandas | def _get_take_nd_function(
ndim: int,
arr_dtype: np.dtype,
out_dtype: np.dtype,
axis: AxisInt = 0,
mask_info=None,
):
Get the appropriate "take" implementation for the given dimension, axis
and dtypes. | def _get_take_nd_function(
ndim: int,
arr_dtype: np.dtype,
out_dtype: np.dtype,
axis: AxisInt = 0,
mask_info=None,
):
"""
Get the appropriate "take" implementation for the given dimension, axis
and dtypes.
"""
func = None
if ndim <= 2:
# for this part we don't need `m... | https://github.com/pandas-dev/pandas/blob/HEAD/pandas/core/array_algos/take.py |
be7079414f2cdaac | docstring | huggingface/datasets | def cast_column(self, column: str, feature: FeatureType) -> "IterableDataset":
Cast column to feature for decoding.
Args:
column (`str`):
Column name.
feature (`Feature`):
Target feature.
Returns:
`IterableDataset`
Example:
```py
>>> from datasets import load_dataset, Audio
>>> ds = loa... | def cast_column(self, column: str, feature: FeatureType) -> "IterableDataset":
"""Cast column to feature for decoding.
Args:
column (`str`):
Column name.
feature (`Feature`):
Target feature.
Returns:
`IterableDataset`
... | https://github.com/huggingface/datasets/blob/HEAD/src/datasets/iterable_dataset.py |
609ffdc1c7b2bea3 | docstring | pypa/pip | def __init__( # noqa: PLR0913, PLR0917
self,
appname: str | None = None,
appauthor: str | Literal[False] | None = None,
version: str | None = None,
roaming: bool = False, # noqa: FBT001, FBT002
multipath: bool = False, # noqa: FBT001, FBT002
opinion: bool = Tru... | def __init__( # noqa: PLR0913, PLR0917
self,
appname: str | None = None,
appauthor: str | Literal[False] | None = None,
version: str | None = None,
roaming: bool = False, # noqa: FBT001, FBT002
multipath: bool = False, # noqa: FBT001, FBT002
opinion: bool = Tru... | https://github.com/pypa/pip/blob/HEAD/src/pip/_vendor/platformdirs/api.py |
f6da58388ad5d8ad | docstring | astropy/astropy | def insert(arr, obj, values, axis=None):
Insert values along the given axis before the given indices.
Like `numpy.insert` but for possibly masked ``arr`` and ``values``.
Masked ``obj`` is not supported.
Raises:
NotImplementedError | def insert(arr, obj, values, axis=None):
"""Insert values along the given axis before the given indices.
Like `numpy.insert` but for possibly masked ``arr`` and ``values``.
Masked ``obj`` is not supported.
"""
from astropy.utils.masked import Masked
if isinstance(obj, Masked) or not isinstance... | https://github.com/astropy/astropy/blob/HEAD/astropy/utils/masked/function_helpers.py |
73250f0aed59f2bc | docstring | optuna/optuna | def plot_pareto_front(
study: Study,
*,
target_names: list[str] | None = None,
include_dominated_trials: bool = True,
axis_order: list[int] | None = None,
constraints_func: Callable[[FrozenTrial], Sequence[float]] | None = None,
targets: Callable[[FrozenTrial], Sequence[float]] | None = None... | def plot_pareto_front(
study: Study,
*,
target_names: list[str] | None = None,
include_dominated_trials: bool = True,
axis_order: list[int] | None = None,
constraints_func: Callable[[FrozenTrial], Sequence[float]] | None = None,
targets: Callable[[FrozenTrial], Sequence[float]] | None = None... | https://github.com/optuna/optuna/blob/HEAD/optuna/visualization/matplotlib/_pareto_front.py |
0d66883494e9f3db | docstring | lepture/authlib | def serialize_json(self, header_obj, payload, key):
Generate a JWS JSON Serialization. The JWS JSON Serialization
represents digitally signed or MACed content as a JSON object,
per `Section 7.2`_.
:param header_obj: A dict/list of header
:param payload: A string/dict of payload
:param key: Private key used to generat... | def serialize_json(self, header_obj, payload, key):
"""Generate a JWS JSON Serialization. The JWS JSON Serialization
represents digitally signed or MACed content as a JSON object,
per `Section 7.2`_.
:param header_obj: A dict/list of header
:param payload: A string/dict of paylo... | https://github.com/lepture/authlib/blob/HEAD/authlib/jose/rfc7515/jws.py |
3dee30319e21428e | docstring | elastic/elasticsearch-py | def get_transform_stats(
self,
*,
transform_id: t.Union[str, t.Sequence[str]],
allow_no_match: t.Optional[bool] = None,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
from_: t.Optional[int] = None,
huma... | def get_transform_stats(
self,
*,
transform_id: t.Union[str, t.Sequence[str]],
allow_no_match: t.Optional[bool] = None,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
from_: t.Optional[int] = None,
huma... | https://github.com/elastic/elasticsearch-py/blob/HEAD/elasticsearch/_sync/client/transform.py |
aa8fbded93c0efd7 | docstring | sqlalchemy/sqlalchemy | def last_updated_params(
self,
) -> Union[
List[_MutableCoreSingleExecuteParams], _MutableCoreSingleExecuteParams
]:
Return the collection of updated parameters from this
execution.
Raises :class:`~sqlalchemy.exc.InvalidRequestError` if the executed
statement is not a compiled expression const... | def last_updated_params(
self,
) -> Union[
List[_MutableCoreSingleExecuteParams], _MutableCoreSingleExecuteParams
]:
"""Return the collection of updated parameters from this
execution.
Raises :class:`~sqlalchemy.exc.InvalidRequestError` if the executed
statement ... | https://github.com/sqlalchemy/sqlalchemy/blob/HEAD/lib/sqlalchemy/engine/cursor.py |
e9563600adb174f6 | docstring | boto/boto3 | def _create_batch_action(
factory_self,
resource_name,
snake_cased,
action_model,
collection_model,
service_model,
event_emitter,
):
Creates a new method which makes a batch operation request
to the underlying service API. | def _create_batch_action(
factory_self,
resource_name,
snake_cased,
action_model,
collection_model,
service_model,
event_emitter,
):
"""
Creates a new method which makes a batch operation request
to the underlying service API.
"... | https://github.com/boto/boto3/blob/HEAD/boto3/resources/collection.py |
f14fdcef28598698 | docstring | prefecthq/prefect | def validate_block_type_slug(cls, values: Any) -> Any:
Validates that the `block_type_slug` in the input values matches the expected
block type slug for the class. This helps pydantic to correctly discriminate
between different Block subclasses when validating Union types of Blocks.
Raises:
ValueError(f"Invalid b... | def validate_block_type_slug(cls, values: Any) -> Any:
"""
Validates that the `block_type_slug` in the input values matches the expected
block type slug for the class. This helps pydantic to correctly discriminate
between different Block subclasses when validating Union types of Blocks.
... | https://github.com/prefecthq/prefect/blob/HEAD/src/prefect/blocks/core.py |
e8d230c12e1ac4df | docstring | twisted/twisted | def lineLengthExceeded(self, buffer):
Drop the connection when a server response exceeds the maximum line
length (L{LineOnlyReceiver.MAX_LENGTH}).
@type buffer: L{bytes}
@param buffer: A received line which exceeds the maximum line length. | def lineLengthExceeded(self, buffer):
"""
Drop the connection when a server response exceeds the maximum line
length (L{LineOnlyReceiver.MAX_LENGTH}).
@type buffer: L{bytes}
@param buffer: A received line which exceeds the maximum line length.
"""
# XXX - We need... | https://github.com/twisted/twisted/blob/HEAD/src/twisted/mail/_pop3client.py |
bc3d64ef78f26e74 | docstring | mongodb/mongo-python-driver | async def _refresh(self) -> int:
Refreshes the cursor with more data from the server.
Returns the length of self._data after refresh. Will exit early if
self._data is already non-empty. Raises OperationFailure when the
cursor cannot be refreshed due to an error on the query. | async def _refresh(self) -> int:
"""Refreshes the cursor with more data from the server.
Returns the length of self._data after refresh. Will exit early if
self._data is already non-empty. Raises OperationFailure when the
cursor cannot be refreshed due to an error on the query.
... | https://github.com/mongodb/mongo-python-driver/blob/HEAD/pymongo/asynchronous/command_cursor.py |
aada958ed5550c1f | docstring | elastic/elasticsearch-py | async def authenticate(
self,
*,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
pretty: t.Optional[bool] = None,
) -> ObjectApiResponse[t.Any]:
.. raw:: html
<p>Authenticate a us... | async def authenticate(
self,
*,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
pretty: t.Optional[bool] = None,
) -> ObjectApiResponse[t.Any]:
"""
.. raw:: html
... | https://github.com/elastic/elasticsearch-py/blob/HEAD/elasticsearch/_async/client/security.py |
661ad8839e2ee6ef | docstring | elastic/elasticsearch-py | async def azip(
*iterables: Union[Iterable[T], AsyncIterable[T]]
) -> AsyncIterable[Tuple[T, ...]]:
Zips async iterables and iterables into an async iterator
with the same behavior as zip() | async def azip(
*iterables: Union[Iterable[T], AsyncIterable[T]]
) -> AsyncIterable[Tuple[T, ...]]:
"""Zips async iterables and iterables into an async iterator
with the same behavior as zip()
"""
aiters = [aiter(x) for x in iterables]
try:
while True:
yield tuple([await x.__... | https://github.com/elastic/elasticsearch-py/blob/HEAD/elasticsearch/_async/helpers.py |
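The truncated `azip` helper above can be sketched end to end. This version (an approximation, not elasticsearch-py's exact code) wraps plain iterables in an async generator so both kinds advance uniformly:

```python
import asyncio

async def azip(*iterables):
    """Zip sync and async iterables with the same behavior as zip()."""
    async def as_async(it):
        # normalize: async iterables pass through, sync ones get wrapped
        if hasattr(it, "__aiter__"):
            async for item in it:
                yield item
        else:
            for item in it:
                yield item

    gens = [as_async(it) for it in iterables]
    while True:
        try:
            yield tuple([await g.__anext__() for g in gens])
        except StopAsyncIteration:
            return  # stop at the shortest input, like zip()

async def demo():
    async def counter():
        for i in range(3):
            yield i * 10

    return [pair async for pair in azip([1, 2, 3, 4], counter())]

print(asyncio.run(demo()))  # [(1, 0), (2, 10), (3, 20)]
```

Like `zip()`, iteration ends as soon as any input is exhausted, so the extra `4` in the sync list is dropped.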
6f2a073c7186feea | github_issue | MagicStack/uvloop | fix: replace pkg_resources with packaging for setuptools 82+
## Summary
pkg_resources was removed from setuptools 82.0.0 (PEP 740, Feb 2026). This breaks uvloop installation when built from source with the latest setuptools.
## Fix
Replace pkg_resources.Requirement.parse() with the stdlib packaging.requirements.Req... | https://github.com/MagicStack/uvloop/pull/737 | |
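The replacement described in this row is a small API swap using the `packaging` library (distributed alongside modern pip/setuptools). A hedged sketch with an illustrative requirement string, not uvloop's actual pin:

```python
from packaging.requirements import Requirement

# Old (pkg_resources, removed from recent setuptools):
#   from pkg_resources import Requirement
#   req = Requirement.parse("Cython>=0.29.36,<0.30.0")
# New:
req = Requirement("Cython>=0.29.36,<0.30.0")

print(req.name)                           # "Cython"
print(req.specifier.contains("0.29.36"))  # True
```

`Requirement` exposes the parsed name and a `SpecifierSet` for version checks, covering the common uses of `pkg_resources.Requirement.parse()`.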
618f3c2bd948a0d1 | docstring | elastic/elasticsearch-py | def put_alibabacloud(
self,
*,
task_type: t.Union[
str, t.Literal["completion", "rerank", "sparse_embedding", "text_embedding"]
],
alibabacloud_inference_id: str,
service: t.Optional[t.Union[str, t.Literal["alibabacloud-ai-search"]]] = None,
service_se... | def put_alibabacloud(
self,
*,
task_type: t.Union[
str, t.Literal["completion", "rerank", "sparse_embedding", "text_embedding"]
],
alibabacloud_inference_id: str,
service: t.Optional[t.Union[str, t.Literal["alibabacloud-ai-search"]]] = None,
service_se... | https://github.com/elastic/elasticsearch-py/blob/HEAD/elasticsearch/_sync/client/inference.py |
aa01e14120d08ee5 | docstring | google/jax | def _get_collective_metadata_size(num_params: int, num_peers: int) -> int:
Returns the size of the collective metadata buffer for the given number of parameters and peers. | def _get_collective_metadata_size(num_params: int, num_peers: int) -> int:
"""Returns the size of the collective metadata buffer for the given number of parameters and peers."""
return (
# Stores the collective metadata structure.
launch_context.COLLECTIVE_METADATA_SIZE
# For each peer we need to ... | https://github.com/google/jax/blob/HEAD/jax/experimental/mosaic/gpu/core.py |
74c7349e9a768d1f | docstring | twisted/twisted | def suppressWarnings(f, *suppressedWarnings):
Wrap C{f} in a callable which suppresses the indicated warnings before
invoking C{f} and unsuppresses them afterwards. If f returns a Deferred,
warnings will remain suppressed until the Deferred fires. | def suppressWarnings(f, *suppressedWarnings):
"""
Wrap C{f} in a callable which suppresses the indicated warnings before
invoking C{f} and unsuppresses them afterwards. If f returns a Deferred,
warnings will remain suppressed until the Deferred fires.
"""
@wraps(f)
def warningSuppressingWr... | https://github.com/twisted/twisted/blob/HEAD/src/twisted/internet/utils.py |
cc9f24c3b8ec47b3 | docstring | pydantic/pydantic | def validate_json(
self,
data: str | bytes | bytearray,
/,
*,
strict: bool | None = None,
extra: ExtraValues | None = None,
context: Any | None = None,
experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
by_alias: b... | def validate_json(
self,
data: str | bytes | bytearray,
/,
*,
strict: bool | None = None,
extra: ExtraValues | None = None,
context: Any | None = None,
experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
by_alias: b... | https://github.com/pydantic/pydantic/blob/HEAD/pydantic/type_adapter.py |
7a496d5f36904336 | docstring | nltk/nltk | def __init__(self, rules, chunk_label="NP", root_label="S", trace=0):
Construct a new ``RegexpChunkParser``.
:type rules: list(RegexpChunkRule)
:param rules: The sequence of rules that should be used to
generate the chunking for a tagged text.
:type chunk_label: str
:param chunk_label: The node value that should ... | def __init__(self, rules, chunk_label="NP", root_label="S", trace=0):
"""
Construct a new ``RegexpChunkParser``.
:type rules: list(RegexpChunkRule)
:param rules: The sequence of rules that should be used to
generate the chunking for a tagged text.
:type chunk_label: ... | https://github.com/nltk/nltk/blob/HEAD/nltk/chunk/regexp.py |
9596c75545b55ab0 | github_issue | paramiko/paramiko | fix: [BUG] wrong documentation on how known hosts are merged
Fixes #2572
The docstrings in `paramiko/client.py` (`load_host_keys`, `load_system_host_keys`) and `paramiko/hostkeys.py` (`HostKeys.load`) incorrectly stated that new entries replace existing ones during a merge conflict. The actual behavior preserves exis... | https://github.com/paramiko/paramiko/pull/2582 | |
9c8471eeafa49373 | docstring | huggingface/datasets | def get_from_cache(
url,
cache_dir=None,
force_download=False,
user_agent=None,
use_etag=True,
token=None,
storage_options=None,
download_desc=None,
disable_tqdm=False,
Given a URL, look for the corresponding file in the local cache.
If it's not there, download it. Then return the p... | def get_from_cache(
url,
cache_dir=None,
force_download=False,
user_agent=None,
use_etag=True,
token=None,
storage_options=None,
download_desc=None,
disable_tqdm=False,
) -> str:
"""
Given a URL, look for the corresponding file in the local cache.
If it's not there, downl... | https://github.com/huggingface/datasets/blob/HEAD/src/datasets/utils/file_utils.py |
f93ef45fc230d024 | docstring | sqlalchemy/alembic | def create_table_comment(
self,
table_name: str,
comment: Optional[str],
*,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
) -> None:
Emit a COMMENT ON operation to set the comment for a table.
:param table_name: st... | def create_table_comment(
self,
table_name: str,
comment: Optional[str],
*,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
) -> None:
"""Emit a COMMENT ON operation to set the comment for a table.
... | https://github.com/sqlalchemy/alembic/blob/HEAD/alembic/operations/base.py |
2c6c830975fa03fe | docstring | scikit-learn/scikit-learn | def _auto_wrap_is_configured(estimator):
Return True if estimator is configured for auto-wrapping the transform method.
`_SetOutputMixin` sets `_sklearn_auto_wrap_output_keys` to `set()` if auto wrapping
is manually disabled. | def _auto_wrap_is_configured(estimator):
"""Return True if estimator is configured for auto-wrapping the transform method.
`_SetOutputMixin` sets `_sklearn_auto_wrap_output_keys` to `set()` if auto wrapping
is manually disabled.
"""
auto_wrap_output_keys = getattr(estimator, "_sklearn_auto_wrap_out... | https://github.com/scikit-learn/scikit-learn/blob/HEAD/sklearn/utils/_set_output.py |
7176440df7f161a6 | docstring | spotify/luigi | def remove(self, path, recursive=True):
Remove a file or directory from S3.
:param path: File or directory to remove
:param recursive: Boolean indicator to remove object and children
:return: Boolean indicator denoting success of the removal of 1 or more files
Raises:
InvalidDeleteException('Cannot delete root of... | def remove(self, path, recursive=True):
"""
Remove a file or directory from S3.
:param path: File or directory to remove
:param recursive: Boolean indicator to remove object and children
:return: Boolean indicator denoting success of the removal of 1 or more files
"""
... | https://github.com/spotify/luigi/blob/HEAD/luigi/contrib/s3.py |
daacd1fcd3bd4a11 | docstring | googleapis/python-pubsub | def pre_list_schemas(
self,
request: schema.ListSchemasRequest,
metadata: Sequence[Tuple[str, Union[str, bytes]]],
) -> Tuple[schema.ListSchemasRequest, Sequence[Tuple[str, Union[str, bytes]]]]:
Pre-rpc interceptor for list_schemas
Override in a subclass to manipulate the request or metada... | def pre_list_schemas(
self,
request: schema.ListSchemasRequest,
metadata: Sequence[Tuple[str, Union[str, bytes]]],
) -> Tuple[schema.ListSchemasRequest, Sequence[Tuple[str, Union[str, bytes]]]]:
"""Pre-rpc interceptor for list_schemas
Override in a subclass to manipulate the... | https://github.com/googleapis/python-pubsub/blob/HEAD/google/pubsub_v1/services/schema_service/transports/rest.py |
e372628d2d217eee | github_issue | sqlalchemy/alembic | [Fixes #1761] Add opt-in autogenerate support for check constraints
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
This adds an opt-in compare_check_constraints configuration option for autogenerate, allowing Alembic to... | https://github.com/sqlalchemy/alembic/pull/1762 | |
3391d7c7b9e0ec31 | docstring | googleapis/python-pubsub | def get_iam_policy(
self,
) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], policy_pb2.Policy]:
Return a callable for the get iam policy method over gRPC.
Gets the IAM access control policy for a function.
Returns an empty policy if the function exists and does
not have a policy set.
Returns:
Callabl... | def get_iam_policy(
self,
) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], policy_pb2.Policy]:
r"""Return a callable for the get iam policy method over gRPC.
Gets the IAM access control policy for a function.
Returns an empty policy if the function exists and does
not have... | https://github.com/googleapis/python-pubsub/blob/HEAD/google/pubsub_v1/services/publisher/transports/grpc.py |
e8e00157f518e79a | docstring | matplotlib/matplotlib | def _rgb_to_rgba(A):
Convert an RGB image to RGBA, as required by the image resample C++
extension. | def _rgb_to_rgba(A):
"""
Convert an RGB image to RGBA, as required by the image resample C++
extension.
"""
rgba = np.zeros((A.shape[0], A.shape[1], 4), dtype=A.dtype)
rgba[:, :, :3] = A
if rgba.dtype == np.uint8:
rgba[:, :, 3] = 255
else:
rgba[:, :, 3] = 1.0
return r... | https://github.com/matplotlib/matplotlib/blob/HEAD/lib/matplotlib/image.py |
20e8de6c0d626d61 | docstring | google/flax | def _fingerprint_recursive(
obj: Any, path: tuple[str, ...], seen_modules: dict[FlaxId, int]
) -> Any:
Creates a hashable representation for a Module by traversing its structure recursively. | def _fingerprint_recursive(
obj: Any, path: tuple[str, ...], seen_modules: dict[FlaxId, int]
) -> Any:
"""Creates a hashable representation for a Module by traversing its structure recursively."""
def _get_fingerprint(name: str, value: Any) -> tuple[str, Any]:
return name, _fingerprint_recursive(value, (*pat... | https://github.com/google/flax/blob/HEAD/flax/linen/transforms.py |
cff03e6318e91d95 | docstring | sqlalchemy/sqlalchemy | def get_multi_columns(
self,
schema: Optional[str] = None,
filter_names: Optional[Sequence[str]] = None,
kind: ObjectKind = ObjectKind.TABLE,
scope: ObjectScope = ObjectScope.DEFAULT,
**kw: Any,
) -> Dict[TableKey, List[ReflectedColumn]]:
Return information about col... | def get_multi_columns(
self,
schema: Optional[str] = None,
filter_names: Optional[Sequence[str]] = None,
kind: ObjectKind = ObjectKind.TABLE,
scope: ObjectScope = ObjectScope.DEFAULT,
**kw: Any,
) -> Dict[TableKey, List[ReflectedColumn]]:
r"""Return informatio... | https://github.com/sqlalchemy/sqlalchemy/blob/HEAD/lib/sqlalchemy/engine/reflection.py |
18b6015615b66b35 | docstring | mongodb/mongo-python-driver | def _error_message(self, selector: Callable[[Selection], Selection]) -> str:
Format an error message if server selection fails.
Hold the lock when calling this. | def _error_message(self, selector: Callable[[Selection], Selection]) -> str:
"""Format an error message if server selection fails.
Hold the lock when calling this.
"""
is_replica_set = self._description.topology_type in (
TOPOLOGY_TYPE.ReplicaSetWithPrimary,
TOPO... | https://github.com/mongodb/mongo-python-driver/blob/HEAD/pymongo/asynchronous/topology.py |
96eab19a688fc3c4 | docstring | openai/openai-python | def __enter__(self) -> RealtimeConnection:
If your application doesn't work well with the context manager approach then you
can call this method directly to initiate a connection.
**Warning**: You must remember to close the connection with `.close()`.
```py
connection = client.beta.realtime.connect(...).enter()
# ... | def __enter__(self) -> RealtimeConnection:
"""
👋 If your application doesn't work well with the context manager approach then you
can call this method directly to initiate a connection.
**Warning**: You must remember to close the connection with `.close()`.
```py
conne... | https://github.com/openai/openai-python/blob/HEAD/src/openai/resources/beta/realtime/realtime.py |
c0b4c1f3ace51a56 | docstring | redis/redis-py | def delex(
self,
name: KeyT,
ifeq: bytes | str | None = None,
ifne: bytes | str | None = None,
ifdeq: str | None = None, # hex digest
ifdne: str | None = None, # hex digest
) -> int | Awaitable[int]:
Conditionally removes the specified key.
Warning:
**Experimental... | def delex(
self,
name: KeyT,
ifeq: bytes | str | None = None,
ifne: bytes | str | None = None,
ifdeq: str | None = None, # hex digest
ifdne: str | None = None, # hex digest
) -> int | Awaitable[int]:
"""
Conditionally removes the specified key.
... | https://github.com/redis/redis-py/blob/HEAD/redis/commands/core.py |
77cdef4298a844e0 | docstring | prefecthq/prefect | async def delete_flow_run(
self,
flow_run_id: "UUID",
) -> None:
Delete a flow run by UUID.
Args:
flow_run_id: The flow run UUID of interest.
Raises:
ObjectNotFound: If request returns 404
httpx.RequestError: If requests fails | async def delete_flow_run(
self,
flow_run_id: "UUID",
) -> None:
"""
Delete a flow run by UUID.
Args:
flow_run_id: The flow run UUID of interest.
Raises:
ObjectNotFound: If request returns 404
httpx.RequestError: If requests fails
... | https://github.com/prefecthq/prefect/blob/HEAD/src/prefect/client/orchestration/_flow_runs/client.py |
948c69f06fa31db1 | docstring | pydantic/pydantic | def get_has_default(stmt: AssignmentStmt) -> bool:
Returns a boolean indicating whether the field defined in `stmt` is a required field. | def get_has_default(stmt: AssignmentStmt) -> bool:
"""Returns a boolean indicating whether the field defined in `stmt` is a required field."""
expr = stmt.rvalue
if isinstance(expr, TempNode):
# TempNode means annotation-only, so has no default
return False
if isi... | https://github.com/pydantic/pydantic/blob/HEAD/pydantic/mypy.py |
5417b18911286aa8 | docstring | nltk/nltk | def mainloop(self, *args, **kwargs):
Enter the Tkinter mainloop. This function must be called if
this demo is created from a non-interactive program (e.g.
from a secript); otherwise, the demo will close as soon as
the script completes. | def mainloop(self, *args, **kwargs):
"""
Enter the Tkinter mainloop. This function must be called if
this demo is created from a non-interactive program (e.g.
from a secript); otherwise, the demo will close as soon as
the script completes.
"""
if in_idle():
... | https://github.com/nltk/nltk/blob/HEAD/nltk/app/chartparser_app.py |
b3a4fcf43d86a694 | docstring | Lightning-AI/pytorch-lightning | def load(
self,
path: Union[str, Path],
state: Optional[dict[str, Union[nn.Module, Optimizer, Any]]] = None,
strict: bool = True,
*,
weights_only: Optional[bool] = None,
) -> dict[str, Any]:
Load a checkpoint from a file and restore the state of objects (modules, opt... | def load(
self,
path: Union[str, Path],
state: Optional[dict[str, Union[nn.Module, Optimizer, Any]]] = None,
strict: bool = True,
*,
weights_only: Optional[bool] = None,
) -> dict[str, Any]:
"""Load a checkpoint from a file and restore the state of objects (mo... | https://github.com/Lightning-AI/pytorch-lightning/blob/HEAD/src/lightning/fabric/fabric.py |
cb16497ff9404796 | docstring | celery/celery | def fixup(app: "Celery", env: str = 'DJANGO_SETTINGS_MODULE') -> Optional["DjangoFixup"]:
Install Django fixup if settings module environment is set. | def fixup(app: "Celery", env: str = 'DJANGO_SETTINGS_MODULE') -> Optional["DjangoFixup"]:
"""Install Django fixup if settings module environment is set."""
SETTINGS_MODULE = os.environ.get(env)
if SETTINGS_MODULE and 'django' not in app.loader_cls.lower():
try:
import django
exce... | https://github.com/celery/celery/blob/HEAD/celery/fixups/django.py |
24cea3dabdbd1834 | docstring | googleapis/python-bigquery | def estimated_bytes_processed(self):
Return the estimated number of bytes processed by the query.
See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.estimated_bytes_processed
Returns:
Optional[int]:
number of DML rows affected by the job, or None if job is not
... | def estimated_bytes_processed(self):
"""Return the estimated number of bytes processed by the query.
See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.estimated_bytes_processed
Returns:
Optional[int]:
number of DML rows ... | https://github.com/googleapis/python-bigquery/blob/HEAD/google/cloud/bigquery/job/query.py |
c697a5849f9d5136 | github_issue | huggingface/accelerate | device_map="auto": silent corruption of tensors captured in register_forward_hook (3+ GPUs, inference_mode, stale bare references)
### System Info
```Shell
- **OS:** Linux 6.17.0-14-generic, x86_64, glibc 2.39 (Ubuntu 24.04 kernel line from `uname`)
- **Python:** 3.12.3
- **accelerate:** 1.13.0
- **torch:** 2.8.0+cu1... | https://github.com/huggingface/accelerate/issues/3995 | |
9ea439303c7b58fd | docstring | googleapis/python-bigquery | def to_arrow_iterable(
self,
bqstorage_client: Optional["bigquery_storage.BigQueryReadClient"] = None,
max_queue_size: int = _pandas_helpers._MAX_QUEUE_SIZE_DEFAULT, # type: ignore
max_stream_count: Optional[int] = None,
timeout: Optional[float] = None,
) -> Iterator["pyarro... | def to_arrow_iterable(
self,
bqstorage_client: Optional["bigquery_storage.BigQueryReadClient"] = None,
max_queue_size: int = _pandas_helpers._MAX_QUEUE_SIZE_DEFAULT, # type: ignore
max_stream_count: Optional[int] = None,
timeout: Optional[float] = None,
) -> Iterator["pyarro... | https://github.com/googleapis/python-bigquery/blob/HEAD/google/cloud/bigquery/table.py |
3c3398a7c8c8cc55 | docstring | numpy/numpy | def dtype_short_repr(dtype):
Convert a dtype to a short form which evaluates to the same dtype.
The intent is roughly that the following holds
>>> from numpy import *
>>> dt = np.int64([1, 2]).dtype
>>> assert eval(dtype_short_repr(dt)) == dt | def dtype_short_repr(dtype):
"""
Convert a dtype to a short form which evaluates to the same dtype.
The intent is roughly that the following holds
>>> from numpy import *
>>> dt = np.int64([1, 2]).dtype
>>> assert eval(dtype_short_repr(dt)) == dt
"""
if type(dtype).__repr__ != np.dtype... | https://github.com/numpy/numpy/blob/HEAD/numpy/_core/arrayprint.py |
e19930e79ea46a87 | docstring | matplotlib/matplotlib | def _draw_paths_with_artist_properties(
self, renderer, draw_path_args_list):
``draw()`` helper factored out for sharing with `FancyArrowPatch`.
Configure *renderer* and the associated graphics context *gc*
from the artist properties, then repeatedly call
``renderer.draw_path(gc, *draw_path_args)`` for ea... | def _draw_paths_with_artist_properties(
self, renderer, draw_path_args_list):
"""
``draw()`` helper factored out for sharing with `FancyArrowPatch`.
Configure *renderer* and the associated graphics context *gc*
from the artist properties, then repeatedly call
``rende... | https://github.com/matplotlib/matplotlib/blob/HEAD/lib/matplotlib/patches.py |
477264b459e3b3fc | docstring | elastic/elasticsearch-py | def params(self, **kwargs: Any) -> Self:
Specify query params to be used when executing the search. All the
keyword arguments will override the current values. See
https://elasticsearch-py.readthedocs.io/en/latest/api/elasticsearch.html#elasticsearch.Elasticsearch.search
for all available parameters.
Example::
s... | def params(self, **kwargs: Any) -> Self:
"""
Specify query params to be used when executing the search. All the
keyword arguments will override the current values. See
https://elasticsearch-py.readthedocs.io/en/latest/api/elasticsearch.html#elasticsearch.Elasticsearch.search
for ... | https://github.com/elastic/elasticsearch-py/blob/HEAD/elasticsearch/dsl/search_base.py |
bc0c564efee3ea5e | docstring | huggingface/datasets | def repeat(self, num_times: int) -> "Dataset":
Create a new [`Dataset`] that repeats the underlying dataset `num_times` times.
Like itertools.repeat, repeating once just returns the full dataset.
Args:
num_times (`int`):
Number of times to repeat the dataset.
Example:
```py
>>> from datasets import load... | def repeat(self, num_times: int) -> "Dataset":
"""
Create a new [`Dataset`] that repeats the underlying dataset `num_times` times.
Like itertools.repeat, repeating once just returns the full dataset.
Args:
num_times (`int`):
Number of times to repeat the dat... | https://github.com/huggingface/datasets/blob/HEAD/src/datasets/arrow_dataset.py |
0101c42b136b4475 | docstring | pallets/flask | def log_exception(
self,
ctx: AppContext,
exc_info: tuple[type, BaseException, TracebackType] | tuple[None, None, None],
) -> None:
Logs an exception. This is called by :meth:`handle_exception`
if debugging is disabled and right before the handler is called.
The default implementation logs... | def log_exception(
self,
ctx: AppContext,
exc_info: tuple[type, BaseException, TracebackType] | tuple[None, None, None],
) -> None:
"""Logs an exception. This is called by :meth:`handle_exception`
if debugging is disabled and right before the handler is called.
The d... | https://github.com/pallets/flask/blob/HEAD/src/flask/app.py |
d49963dccf36aa35 | docstring | dropbox/dropbox-sdk-python | def file_properties_templates_update_for_user(self,
template_id,
name=None,
description=None,
add_fields=None):
Update ... | def file_properties_templates_update_for_user(self,
template_id,
name=None,
description=None,
add_fields=None):
... | https://github.com/dropbox/dropbox-sdk-python/blob/HEAD/dropbox/base.py |
8e7597af4c383cf1 | docstring | twilio/twilio-python | async def page_with_http_info_async(
self,
to: Union[str, object] = values.unset,
from_: Union[str, object] = values.unset,
parent_call_sid: Union[str, object] = values.unset,
status: Union["CallInstance.Status", object] = values.unset,
start_time: Union[datetime, object]... | async def page_with_http_info_async(
self,
to: Union[str, object] = values.unset,
from_: Union[str, object] = values.unset,
parent_call_sid: Union[str, object] = values.unset,
status: Union["CallInstance.Status", object] = values.unset,
start_time: Union[datetime, object]... | https://github.com/twilio/twilio-python/blob/HEAD/twilio/rest/api/v2010/account/call/__init__.py |
7e3899ff04c126cf | docstring | aws/aws-cli | def walk(self, shape, visitor):
Walk through and visit shapes for introspection
:type shape: botocore.model.Shape
:param shape: Shape to walk
:type visitor: BaseShapeVisitor
:param visitor: The visitor to call when walking a shape | def walk(self, shape, visitor):
"""Walk through and visit shapes for introspection
:type shape: botocore.model.Shape
:param shape: Shape to walk
:type visitor: BaseShapeVisitor
:param visitor: The visitor to call when walking a shape
"""
if shape is None:
... | https://github.com/aws/aws-cli/blob/HEAD/awscli/utils.py |
0466a58ee2e65d9f | docstring | prefecthq/prefect | def read_work_queue_status(
self,
id: UUID,
) -> WorkQueueStatusDetail:
Read a work queue status.
Args:
id: the id of the work queue to load
Raises:
prefect.exceptions.ObjectNotFound: If request returns 404
httpx.RequestError: If request fails
Returns:
WorkQueueStatus: an instant... | def read_work_queue_status(
self,
id: UUID,
) -> WorkQueueStatusDetail:
"""
Read a work queue status.
Args:
id: the id of the work queue to load
Raises:
prefect.exceptions.ObjectNotFound: If request returns 404
httpx.RequestError:... | https://github.com/prefecthq/prefect/blob/HEAD/src/prefect/client/orchestration/__init__.py |
713eea8d84ab3ea8 | docstring | huggingface/accelerate | def get_logger(name: str, log_level: str | None = None):
Returns a `logging.Logger` for `name` that can handle multiprocessing.
If a log should be called on all processes, pass `main_process_only=False` If a log should be called on all
processes and in order, also pass `in_order=True`
Args:
name (`str`):
... | def get_logger(name: str, log_level: str | None = None):
"""
Returns a `logging.Logger` for `name` that can handle multiprocessing.
If a log should be called on all processes, pass `main_process_only=False` If a log should be called on all
processes and in order, also pass `in_order=True`
Args:
... | https://github.com/huggingface/accelerate/blob/HEAD/src/accelerate/logging.py |
387f502be97f2838 | docstring | elastic/elasticsearch-py | def delete(
self,
*,
id: str,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
pretty: t.Optional[bool] = None,
) -> ObjectApiResponse[t.Any]:
.. raw:: html
<p>Delete an as... | def delete(
self,
*,
id: str,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
pretty: t.Optional[bool] = None,
) -> ObjectApiResponse[t.Any]:
"""
.. raw:: html... | https://github.com/elastic/elasticsearch-py/blob/HEAD/elasticsearch/_sync/client/async_search.py |
ff8a70611a393f66 | docstring | elastic/elasticsearch-py | def delete_repository(
self,
*,
name: t.Union[str, t.Sequence[str]],
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
master_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Liter... | def delete_repository(
self,
*,
name: t.Union[str, t.Sequence[str]],
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
master_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Liter... | https://github.com/elastic/elasticsearch-py/blob/HEAD/elasticsearch/_sync/client/snapshot.py |
f1ee69ad222c5b53 | docstring | huggingface/peft | def _remove_adapted_attentions(self, adapter_name: str) -> None:
Remove AdaptedAttention modules from the model and store them in the cache. | def _remove_adapted_attentions(self, adapter_name: str) -> None:
"""Remove AdaptedAttention modules from the model and store them in the cache."""
config = self.peft_config[adapter_name]
adapted_attentions = []
for par in self._parents[adapter_name]:
attn = getattr(par, confi... | https://github.com/huggingface/peft/blob/HEAD/src/peft/tuners/adaption_prompt/model.py |
234ce77f6a8b98e4 | docstring | astropy/astropy | def get_config_filename(packageormod=None, rootname=None):
Get the filename of the config file associated with the given
package or module. | def get_config_filename(packageormod=None, rootname=None):
"""
Get the filename of the config file associated with the given
package or module.
"""
cfg = get_config(packageormod, rootname=rootname)
while cfg.parent is not cfg:
cfg = cfg.parent
return cfg.filename | https://github.com/astropy/astropy/blob/HEAD/astropy/config/configuration.py |
spec-verify dataset v2
Description
This is the v2 dataset on the Hugging Face Hub (`spec-verify-dataset-v2`), containing 10,000+ high-quality (specification, code) pairs.
- Quality filtered: non-English and low-signal rows are removed using heuristics (ASCII/Latin ratio, repetition, minimum word counts, valid docstring shape for Python rows).
- English-oriented: specifications are cleaned to ASCII/Latin-heavy text; issue/PR and docstring rows must pass English dominance checks.
- Real code for docstring rows: the `docstring` subset requires a substantial Python implementation (not signature-only), including logic keywords and length checks.
- GitHub issues / PRs: natural-language specifications only (`code` may be empty); filtered for English and non-repetitive text.
Pairs are mined from public GitHub repositories. Intended for spec-to-code, code-to-spec, and verification research.
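The filtering heuristics described above (ASCII/Latin ratio, repetition, minimum word count) can be sketched roughly as follows. The actual thresholds used by the pipeline are not published, so the values here are illustrative assumptions:

```python
def looks_english(text: str, min_words: int = 10,
                  ascii_ratio: float = 0.9, min_unique: float = 0.5) -> bool:
    """Heuristic English/low-signal filter.

    Thresholds (10 words, 0.9 ASCII, 0.5 unique-word ratio) are
    illustrative guesses, not the pipeline's real values.
    """
    words = text.split()
    # Minimum word count: very short specs carry too little signal
    if len(words) < min_words:
        return False
    # ASCII/Latin dominance: share of characters in the ASCII range
    if sum(c.isascii() for c in text) / max(len(text), 1) < ascii_ratio:
        return False
    # Repetition: a low unique-word ratio signals boilerplate or spam
    if len({w.lower() for w in words}) / len(words) < min_unique:
        return False
    return True
```

A row's specification would be dropped whenever this returns `False`; the real pipeline presumably layers additional checks (e.g. valid docstring shape) on top.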
Data fields
| Column | Meaning |
|---|---|
| `id` | Stable hash identifier for the row |
| `source` | `github_issue` (includes PR descriptions in this schema) or `docstring` |
| `repo` | Source repository in `owner/name` form |
| `specification` | English-cleaned NL spec (issue/PR text, or signature + docstring + optional structured notes from extraction) |
| `code` | Full function implementation for `docstring` rows; often empty for `github_issue` |
| `url` | Link to the issue, pull request, or file on GitHub |
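A minimal shape check for rows against this schema might look like the sketch below. The rule that `docstring` rows need a non-trivial implementation follows from the card; the 40-character minimum is an illustrative threshold, not the pipeline's actual one:

```python
REQUIRED_FIELDS = {"id", "source", "repo", "specification", "code", "url"}

def row_is_valid(row: dict) -> bool:
    """Check a row against the card's schema.

    `docstring` rows must carry real code; `github_issue` rows may have
    an empty `code` field. The 40-char minimum is an assumed threshold.
    """
    if not REQUIRED_FIELDS.issubset(row):
        return False
    if row["source"] not in ("github_issue", "docstring"):
        return False
    if not row["specification"].strip():
        return False
    if row["source"] == "docstring" and len(row["code"].strip()) < 40:
        return False
    return True
```

This kind of check is useful when re-filtering or extending the dataset locally, since it encodes the per-`source` asymmetry in the `code` column.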
Example record
{
"id": "d3b066c9609ab8e6",
"source": "github_issue",
"repo": "aio-libs/aiohttp",
"specification": "GunicornWebWorker fails to reset SIGCHLD causing spurious \"Worker exited\" errors\n\n### Describe the bug\n\nThe `aiohttp.worker.GunicornWebWorker` class overrides `init_signals()` but fails to reset SIGCHLD to SIG_DFL. This causes workers to inherit the gunicorn master arbiter's SIGCHLD handler, leading to spurious error logs when application code spawns subprocesses.\n\n## Evidence\n\n### 1. aiohttp/worker.py has incomplete fix\nLines 191-192 contain a comment about resetting signals, but NO implementation:\n\n```python\n# Reset signals so Gunicorn doesn't swallow subprocess return codes\n# See: https://github.com/aio-libs/aiohttp/issues/6130\n# CODE IS MISSING HERE!\n```\n\n### 2. Base Worker resets SIGCHLD correctly\n`gunicorn/workers/base.py` properly resets SIGCHLD:\n\n```python\nSIGNALS = [\"ABRT\", \"HUP\", \"QUIT\", \"INT\", \"TERM\", \"USR1\", \"USR2\", \"WINCH\", \"CHLD\"]\n\ndef init_signals(self):\n # reset signaling\n for s in self.SIGNALS:\n signal.signal(s, signal.SIG_DFL) # Resets SIGCHLD\n```\n\n### 3. 
aiohttp Worker doesn't call super()\n`aiohttp/worker.py:160-193` overrides `init_signals()` but never calls `super().init_signals()`, so SIGCHLD is never reset.\n\n### To Reproduce\n\n1.Install gunicorn and aiohttp latest versions\n2.Run this code from below, see usage inside\n```\n#!/usr/bin/env python3\n\"\"\"\nReproduction of aiohttp GunicornWebWorker SIGCHLD inheritance bug.\n\nUsage:\n python aiohttp_sigchld_bug.py # Show the bug\n python aiohttp_sigchld_bug.py --fixed # Show with fix\n\nBug: Workers inherit master's SIGCHLD handler, logging spurious errors\nabout subprocess PIDs as if they were worker PIDs.\n\nExpected: Clean worker startup\nActual (without fix): ERROR logs like \"Worker (pid:X) exited with code 1\"\nwhere X never appears in \"Booting worker\" logs.\n\"\"\"\nimport sys\n\n# Run with gunicorn when executed directly\nif __name__ == \"__main__\":\n import os\n import subprocess\n import tempfile\n\n apply_fix = \"--fixed\" in sys.argv\n\n # Create a separate module that spawns subprocesses during import\n submodule_content = \"\"\"\nimport subprocess\nimport time\n\n# Spawn subprocesses during module import to trigger SIGCHLD\n# This simulates real-world code that checks versions, calls git, dpkg, etc.\nprocesses = []\nfor i in range(5):\n # Spawn subprocess that exits with code 1\n p = subprocess.Popen(['false'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n processes.append(p)\n\n# Small delay creates window where:\n# 1. Subprocesses exit and become zombies\n# 2. SIGCHLD is delivered to worker\n# 3. 
If worker has inherited arbiter's handler, it reaps OTHER workers' zombies\ntime.sleep(0.05)\n\n# Clean up our zombies\nfor p in processes:\n p.wait()\n\"\"\"\n\n # Create the main app module\n app_content = \"\"\"\n# Import submodule that spawns subprocesses (triggers bug)\nimport subprocess_module\n\nfrom aiohttp import web\n\nasync def hello(request):\n return web.Response(text=\"Hello World\\\\n\")\n\nasync def application():\n app = web.Application()\n app.router.add_get(\"/\", hello)\n return app\n\"\"\"\n\n # Create config with optional fix\n config_content = \"\"\"\nworkers = 4\nworker_class = \"aiohttp.worker.GunicornWebWorker\"\nbind = \"127.0.0.1:8765\"\nloglevel = \"info\"\n\"\"\"\n\n if appl
… (truncated for card)
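The `id` in the record above is a 16-hex-character stable hash. The actual hashing scheme is not documented; one plausible way to derive such an identifier (purely a guess for illustration) is a truncated SHA-256 over the row's key fields:

```python
import hashlib

def stable_id(repo: str, url: str, specification: str) -> str:
    """Hypothetical stable id: first 16 hex chars of SHA-256 over key fields.

    The real pipeline's inputs and digest are unknown; this only shows
    how a deterministic short identifier of this shape can be built.
    """
    # Join fields with a separator unlikely to occur in the data
    payload = "\x1f".join((repo, url, specification)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]
```

Any scheme like this gives deterministic, collision-resistant short ids, which is what makes them usable as stable row keys across dataset rebuilds.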
Intended use
- Training and evaluating models that generate formal or behavioral specs from natural language
- Spec-to-code or code-to-spec alignment with implementation-heavy Python for docstring-sourced rows
- Test / verification research conditioned on specifications
Limitations
- Sources are public GitHub data; quality and licensing follow upstream repositories.
- Heuristic filters favor English and Latin script; they do not guarantee semantic correctness.
- Older v1 uploads may use a different schema; this repo is v2 (`spec-verify-dataset-v2`).
Citation
If you use this dataset, cite the upstream repositories and acknowledge the spec-verify extraction pipeline.